Sample records for computing problems presentation

  1. Exact parallel algorithms for some members of the traveling salesman problem family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pekny, J.F.

    1989-01-01

    The traveling salesman problem and its many generalizations comprise one of the best-known combinatorial optimization problem families. Most members of the family are NP-complete problems, so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is itself an NP-complete problem. An exact sequential algorithm is also presented for the directed Hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using 14- and 100-processor BBN Butterfly Plus computers. The computational results represent the largest instances ever solved to optimality on any type of computer.

  2. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analyses of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, so overall multigrid savings in CPU time are generally better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
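
    To make the multigrid concept concrete, the sketch below implements a single two-grid correction cycle for a 1-D Poisson model problem: relax on the fine grid, solve the restricted residual equation on a coarse grid, and interpolate the correction back. This is a minimal illustration of the acceleration idea only, not the Proteus implementation; all names and parameter choices here are assumptions.

    ```python
    import numpy as np

    def two_grid_cycle(f, u, n_smooth=3, omega=2/3):
        """One two-grid cycle for -u'' = f on (0, 1) with zero boundary
        values, discretized on n interior points (n odd). A sketch of the
        multigrid idea only, not the Proteus implementation."""
        n = len(u)
        h = 1.0 / (n + 1)

        def smooth(u, f, steps):
            # weighted-Jacobi relaxation damps high-frequency error
            for _ in range(steps):
                un = 0.5 * (np.roll(u, 1) + np.roll(u, -1) + h * h * f)
                un[0] = 0.5 * (u[1] + h * h * f[0])
                un[-1] = 0.5 * (u[-2] + h * h * f[-1])
                u = (1 - omega) * u + omega * un
            return u

        u = smooth(u, f, n_smooth)
        # fine-grid residual r = f - A u for the 3-point Laplacian A
        Au = (2 * u - np.roll(u, 1) - np.roll(u, -1)) / (h * h)
        Au[0] = (2 * u[0] - u[1]) / (h * h)
        Au[-1] = (2 * u[-1] - u[-2]) / (h * h)
        r = f - Au
        rc = r[1::2]                      # restriction by injection
        nc, hc = len(rc), 2 * h
        Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (hc * hc)
        ec = np.linalg.solve(Ac, rc)      # exact coarse-grid solve
        e = np.zeros(n)                   # linear interpolation of correction
        e[1::2] = ec
        e[0::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
        return smooth(u + e, f, n_smooth)
    ```

    Iterating such a cycle reduces the residual by a roughly constant factor per cycle, which is the behavior behind the iteration-count reductions reported above.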

  3. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  4. Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story

    ERIC Educational Resources Information Center

    Gunbas, N.

    2015-01-01

    The purpose of this study was to investigate the effect of a computer-based story, which was designed in anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as computer story, and then compared with the paper-based version of the same story…

  5. Heterogeneity in Health Care Computing Environments

    PubMed Central

    Sengupta, Soumitra

    1989-01-01

    This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems it creates in developing integrated medical information systems. The need for institutional, comprehensive goals is emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.

  6. Application of evolutionary computation in ECAD problems

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Hyun; Hwang, Seung H.

    1998-10-01

    The design of modern electronic systems is a complicated task which demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computations such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to ECAD problems such as technology mapping, microcode-bit optimization, data path ordering, and peak power estimation, where their benefits are clearly observed. This paper presents experiences and discusses issues in those applications.

  7. Benchmarking optimization software with COPS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, E.D.; More, J.J.

    2001-01-08

    The COPS test set provides a modest selection of difficult nonlinearly constrained optimization problems from applications in optimal design, fluid dynamics, parameter estimation, and optimal control. In this report we describe version 2.0 of the COPS problems. The formulation and discretization of the original problems have been streamlined and improved, and new problems have been added. The presentation of COPS follows the original report, but the description of the problems has been streamlined. For each problem we discuss its formulation and summarize structural data on the formulation in Table 0.1; the aim of presenting these data is to provide an approximate idea of the size and sparsity of the problem. We also include the results of computational experiments with the LANCELOT, LOQO, MINOS, and SNOPT solvers. These computational experiments differ from the original results in that we have deleted problems that were considered too easy. Moreover, in the current version of the computational experiments, each problem is tested with four variations. An important difference between this report and the original report is that the tables that present the computational experiments are generated automatically from the testing script. This is explained in more detail in the report.

  8. Evoking Knowledge and Information Awareness for Enhancing Computer-Supported Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Engelmann, Tanja; Tergan, Sigmar-Olaf; Hesse, Friedrich W.

    2010-01-01

    Computer-supported collaboration by spatially distributed group members still involves interaction problems within the group. This article presents an empirical study investigating the question of whether computer-supported collaborative problem solving by spatially distributed group members can be fostered by evoking knowledge and information…

  9. A cross-disciplinary introduction to quantum annealing-based algorithms

    NASA Astrophysics Data System (ADS)

    Venegas-Andraca, Salvador E.; Cruz-Santos, William; McGeoch, Catherine; Lanzagorta, Marco

    2018-04-01

    A central goal in quantum computing is the development of quantum hardware and quantum algorithms in order to analyse challenging scientific and engineering problems. Research in quantum computation involves contributions from both physics and computer science; hence this article presents a concise introduction to basic concepts from both fields that are used in annealing-based quantum computation, an alternative to the more familiar quantum gate model. We introduce some concepts from computer science required to define difficult computational problems and to realise the potential relevance of quantum algorithms to find novel solutions to those problems. We introduce the structure of quantum annealing-based algorithms as well as two examples of this kind of algorithm for solving instances of the max-SAT and Minimum Multicut problems. An overview of the quantum annealing systems manufactured by D-Wave Systems is also presented.
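
    As a concrete illustration of the kind of optimization problem handed to an annealer, the toy sketch below encodes a small max-SAT instance as an energy function (the number of violated clauses) and minimizes it by exhaustive enumeration; annealing hardware instead minimizes an equivalent Ising/QUBO form of such an energy. The instance and all names are illustrative assumptions, not taken from the article.

    ```python
    from itertools import product

    # Toy MAX-SAT instance over 3 variables: +i is variable i, -i its negation.
    clauses = [(1, 2), (-1, 3), (-2, -3), (1, -3)]

    def energy(bits):
        """Number of violated clauses; an annealer minimizes an equivalent
        Ising/QUBO encoding of this quantity."""
        assign = {i + 1: b for i, b in enumerate(bits)}
        return sum(
            not any(assign[abs(l)] if l > 0 else not assign[abs(l)] for l in clause)
            for clause in clauses
        )

    best = min(product([0, 1], repeat=3), key=energy)
    print(best, energy(best))   # an optimal assignment and its violated-clause count
    ```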

  10. Parallel Computation of Unsteady Flows on a Network of Workstations

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. A network of workstations offers an efficient solution, allowing large problems to be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. There is also the question of how efficiently any given numerical algorithm performs on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code, both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  11. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  12. The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    NASA Technical Reports Server (NTRS)

    Chesler, L.; Pierce, S.

    1971-01-01

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
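
    For readers unfamiliar with multistep integration, the sketch below shows the classical two-step Adams-Bashforth method, the simplest member of the family the paper generalizes; the generalized, cyclic, and modified variants and the GEOSTAR coefficients are not reproduced here, and all names are illustrative.

    ```python
    import numpy as np

    def ab2(f, y0, t0, dt, steps):
        """Two-step Adams-Bashforth integration of y' = f(t, y),
        bootstrapped with one Euler step. A plain member of the multistep
        family discussed above, not the paper's specialized variants."""
        y = np.asarray(y0, dtype=float)
        out = [y.copy()]
        f_prev = f(t0, y)
        y = y + dt * f_prev               # Euler bootstrap for the second value
        t = t0 + dt
        out.append(y.copy())
        for _ in range(steps - 1):
            f_curr = f(t, y)
            y = y + dt * (1.5 * f_curr - 0.5 * f_prev)
            f_prev, t = f_curr, t + dt
            out.append(y.copy())
        return np.array(out)

    ys = ab2(lambda t, y: -y, [1.0], 0.0, 0.01, 100)   # decays like exp(-t)
    ```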

  13. The Language Factor in Elementary Mathematics Assessments: Computational Skills and Applied Problem Solving in a Multidimensional IRT Framework

    ERIC Educational Resources Information Center

    Hickendorff, Marian

    2013-01-01

    The results of an exploratory study into measurement of elementary mathematics ability are presented. The focus is on the abilities involved in solving standard computation problems on the one hand and problems presented in a realistic context on the other. The objectives were to assess to what extent these abilities are shared or distinct, and…

  14. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  15. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  16. Optimal Control of Thermo-Fluid Phenomena in Variable Domains

    NASA Astrophysics Data System (ADS)

    Volkov, Oleg; Protas, Bartosz

    2008-11-01

    This presentation concerns our continued research on adjoint-based optimization of viscous incompressible flows (the Navier-Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free-boundary problems requires the use of the shape-differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two-phase Stefan problem with contact point singularities, where our approach allows us to obtain a thermodynamically consistent solution.

  17. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  18. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  19. Non-steady state modelling of wheel-rail contact problem

    NASA Astrophysics Data System (ADS)

    Guiral, A.; Alonso, A.; Baeza, L.; Giménez, J. G.

    2013-01-01

    Among the algorithms for solving the wheel-rail contact problem, Kalker's FastSim has become the most widely used computational tool since it combines a low computational cost with sufficient precision for most typical railway dynamics problems. However, some types of dynamic problems require a non-steady state analysis. Alonso and Giménez developed a non-stationary method based on FastSim which provides both sufficiently accurate results and a low computational cost. However, it has some limitations: the method was developed for a single time-dependent creepage, and its accuracy for varying normal forces has not been checked. This article presents the changes required to address both problems and compares its results with those given by Kalker's Variational Method for rolling contact.

  20. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  1. Computing Evans functions numerically via boundary-value problems

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin

    2018-03-01

    The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.

  2. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  3. REACTT: an algorithm for solving spatial equilibrium problems.

    Treesearch

    D.J. Brooks; J. Kincaid

    1987-01-01

    The problem of determining equilibrium prices and quantities in spatially separated markets is reviewed. Algorithms that compute spatial equilibria are discussed. A computer program using the reactive programming algorithm for solving spatial equilibrium problems that involve multiple commodities is presented, along with detailed documentation. A sample data set,...

  4. Computer Applications for Alternative Assessment: An Instructional and Organization Dilemma.

    ERIC Educational Resources Information Center

    Mills, Ed; Brown, John A.

    1997-01-01

    Describes the possibilities and problems that computer-generated portfolios will soon present to instructors across America. Highlights include the history of portfolio assessment, logistical problems of handling portfolios in the traditional three-ring binder format, use of the zip drive for storage, and software/hardware compatibility problems.…

  5. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
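
    The sketch below shows a bare-bones genetic algorithm of the kind described above for narrowing the search space classically before an annealer refines the best candidates. Everything here (operators, parameter values, function names) is an illustrative assumption, not the authors' implementation.

    ```python
    import random

    def genetic_search(fitness, n_bits, pop_size=40, gens=100, p_mut=0.02, seed=1):
        """Minimal genetic algorithm over bit strings; the best individuals
        could seed a subsequent local (e.g. annealing-based) refinement."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]           # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n_bits)       # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = genetic_search(sum, 16)   # toy fitness: maximize the number of 1 bits
    print(best, sum(best))
    ```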

  6. A hybrid computer program for rapidly solving flowing or static chemical kinetic problems involving many chemical species

    NASA Technical Reports Server (NTRS)

    Mclain, A. G.; Rao, C. S. R.

    1976-01-01

    A hybrid chemical kinetic computer program was assembled which provides a rapid solution to problems involving flowing or static, chemically reacting, gas mixtures. The computer program uses existing subroutines for problem setup, initialization, and preliminary calculations and incorporates a stiff ordinary differential equation solution technique. A number of check cases were recomputed with the hybrid program and the results were almost identical to those previously obtained. The computational time saving was demonstrated with a propane-oxygen-argon shock tube combustion problem involving 31 chemical species and 64 reactions. Information is presented to enable potential users to prepare an input data deck for the calculation of a problem.

  7. Computational Fluency and Strategy Choice Predict Individual and Cross-National Differences in Complex Arithmetic

    ERIC Educational Resources Information Center

    Vasilyeva, Marina; Laski, Elida V.; Shen, Chen

    2015-01-01

    The present study tested the hypothesis that children's fluency with basic number facts and knowledge of computational strategies, derived from early arithmetic experience, predicts their performance on complex arithmetic problems. First-grade students from the United States and Taiwan (N = 152, mean age: 7.3 years) were presented with problems that…

  8. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising venue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential sized masks with exponential space complexity produced in polynomial time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscaled density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.

  9. Computational fluency and strategy choice predict individual and cross-national differences in complex arithmetic.

    PubMed

    Vasilyeva, Marina; Laski, Elida V; Shen, Chen

    2015-10-01

    The present study tested the hypothesis that children's fluency with basic number facts and knowledge of computational strategies, derived from early arithmetic experience, predicts their performance on complex arithmetic problems. First-grade students from the United States and Taiwan (N = 152, mean age: 7.3 years) were presented with problems that differed in difficulty: single-, mixed-, and double-digit addition. Children's strategy use varied as a function of problem difficulty, consistent with Siegler's theory of strategy choice. The use of the decomposition strategy interacted with computational fluency in predicting the accuracy of double-digit addition. Further, the frequency of decomposition and computational fluency fully mediated cross-national differences in accuracy on these complex arithmetic problems. The results indicate the importance of both fluency with basic number facts and the decomposition strategy for later arithmetic performance.

  10. Two Quantum Protocols for Oblivious Set-member Decision Problem

    NASA Astrophysics Data System (ADS)

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-10-01

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide whether a secret of another party belongs to his private set in an oblivious manner. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting, and electronic auctions. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it has lower communication and computation costs; Protocol II takes photons as quantum resources and performs only simple single-particle projective measurements, so it is more feasible with present technology.

  11. Two Quantum Protocols for Oblivious Set-member Decision Problem

    PubMed Central

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-01-01

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide whether a secret of another party belongs to his private set in an oblivious manner. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting, and electronic auctions. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it has lower communication and computation costs; Protocol II takes photons as quantum resources and performs only simple single-particle projective measurements, so it is more feasible with present technology. PMID:26514668

  12. Two Quantum Protocols for Oblivious Set-member Decision Problem.

    PubMed

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-10-30

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide whether a secret of another party belongs to his private set in an oblivious manner. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting, and electronic auctions. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it has lower communication and computation costs; Protocol II takes photons as quantum resources and performs only simple single-particle projective measurements, so it is more feasible with present technology.

  13. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of validating the new adaptive methodology by computing demonstration problems and comparing the computed stress intensity factors to analytical results.

  14. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA); the deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here, followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  15. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
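
    For reference, the benchmark instance mentioned above is small enough to enumerate directly; the sketch below lists every achievable subset sum of {2, 5, 9} sequentially, whereas the nanofabricated network explores the corresponding paths in parallel.

    ```python
    from itertools import combinations

    nums = [2, 5, 9]   # the subset sum instance solved by the device
    sums = sorted({sum(c) for r in range(len(nums) + 1)
                   for c in combinations(nums, r)})
    print(sums)        # [0, 2, 5, 7, 9, 11, 14, 16]
    ```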

  16. Chess games: a model for RNA based computation.

    PubMed

    Cukras, A R; Faulhammer, D; Lipton, R J; Landweber, L F

    1999-10-01

    Here we develop the theory of RNA computing and a method for solving the 'knight problem' as an instance of a satisfiability (SAT) problem. Using only biological molecules and enzymes as tools, we developed an algorithm for solving the knight problem (3 x 3 chess board) using a 10-bit combinatorial pool and sequential RNase H digestions. The results of preliminary experiments presented here reveal that the protocol recovers far more correct solutions than expected at random, but the persistence of errors still presents the greatest challenge.
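
    Reading the knight problem as asking for placements of non-attacking knights on a 3 x 3 board, one bit per square, the classical brute force below enumerates the same search space that the RNA protocol samples in parallel. The encoding and names are assumptions made for illustration.

    ```python
    from itertools import product

    N = 3
    squares = [(r, c) for r in range(N) for c in range(N)]
    jumps = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]

    def legal(board):
        """True if no knight on the board attacks another."""
        occupied = {sq for sq, bit in zip(squares, board) if bit}
        return not any((r + dr, c + dc) in occupied
                       for (r, c) in occupied for dr, dc in jumps)

    solutions = [b for b in product([0, 1], repeat=N * N) if legal(b)]
    print(len(solutions))   # number of legal boards among the 512 candidates
    ```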

  17. Human factors in the presentation of computer-generated information - Aspects of design and application in automated flight traffic

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships, the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.

  18. Computer software documentation

    NASA Technical Reports Server (NTRS)

    Comella, P. A.

    1973-01-01

    A tutorial in the documentation of computer software is presented. It presents a methodology for achieving an adequate level of documentation as a natural outgrowth of the total programming effort commencing with the initial problem statement and definition and terminating with the final verification of code. It discusses the content of adequate documentation, the necessity for such documentation and the problems impeding achievement of adequate documentation.

  19. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents the results of an investigation of the class of search-intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. An object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.

  20. Computer analysis of multicircuit shells of revolution by the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1975-01-01

    The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.

  1. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  2. Interactive Computer Based Assessment Tasks: How Problem-Solving Process Data Can Inform Instruction

    ERIC Educational Resources Information Center

    Zoanetti, Nathan

    2010-01-01

    This article presents key steps in the design and analysis of a computer based problem-solving assessment featuring interactive tasks. The purpose of the assessment is to support targeted instruction for students by diagnosing strengths and weaknesses at different stages of problem-solving. The first focus of this article is the task piloting…

  3. Nonstationary homogeneous nucleation

    NASA Technical Reports Server (NTRS)

    Harstad, K. G.

    1974-01-01

    The theory of homogeneous condensation is reviewed and equations describing this process are presented. Numerical computer solutions to transient problems in nucleation (relaxation to steady state) are presented and compared to a prior computation.

  4. Substructure analysis using NICE/SPAR and applications of force to linear and nonlinear structures. [spacecraft masts

    NASA Technical Reports Server (NTRS)

    Razzaq, Zia; Prasad, Venkatesh; Darbhamulla, Siva Prasad; Bhati, Ravinder; Lin, Cai

    1987-01-01

    Parallel computing studies are presented for a variety of structural analysis problems. Included are the substructure planar analysis of rectangular panels with and without a hole, the static analysis of a space mast using NICE/SPAR and FORCE, and the substructure analysis of plane rigid-jointed frames using FORCE. The computations are carried out on the Flex/32 MultiComputer using one to eighteen processors. The NICE/SPAR runstream samples are documented for the panel problem. For the substructure analysis of plane frames, a computer program is developed to demonstrate the effectiveness of a substructuring technique when FORCE is employed. Ongoing research activities on an elasto-plastic stability analysis problem using FORCE, and on stability analysis of the focus problem using NICE/SPAR, are briefly summarized. Speedup curves for the panel, mast, and frame problems provide a basic understanding of the effectiveness of the parallel computing procedures utilized or developed, within the domain of the parameters considered. Although the speedup curves obtained exhibit various levels of computational efficiency, they clearly demonstrate the excellent promise which parallel computing holds for structural analysis problems. Source code is given for the elasto-plastic stability problem and the FORCE program.

  5. Data Processing: Fifteen Suggestions for Computer Training in Your Business Education Classes.

    ERIC Educational Resources Information Center

    Barr, Lowell L.

    1980-01-01

    Presents 15 suggestions for training business education students in the use of computers. Suggestions involve computer language, method of presentation, laboratory time, programming assignments, instructions and handouts, problem solving, deadlines, reviews, programming concepts, programming logic, documentation, and defensive programming. (CT)

  6. On the Use of Computers for Teaching Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Several approaches for improving the teaching of basic fluid mechanics using computers are presented. These approaches have two objectives: to increase the involvement of the student in the learning process and to present information to the student in a variety of forms. Items discussed include: the preparation of educational videos using the results of computational fluid dynamics (CFD) calculations, the analysis of CFD flow solutions using workstation-based post-processing graphics packages, and the development of workstation or personal computer based simulators which behave like desktop wind tunnels. Examples of these approaches are presented along with observations from working with undergraduate co-ops. Possible problems in the implementation of these approaches, as well as solutions to these problems, are also discussed.

  7. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.

  8. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  9. Automatically Generated Algorithms for the Vertex Coloring Problem

    PubMed Central

    Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor

    2013-01-01

    The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications that exist for this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem are consequences of hybridizing the various known heuristics. Automatically searching the space formed by combining these techniques to find the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that allowed comparing them numerically with existing heuristics. The obtained algorithms present an average 29.97% relative error, while four other heuristics selected from the literature present a 59.73% error, considering 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
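
    One of the elementary heuristics that such evolutionary combinations typically build on is first-fit greedy coloring; a minimal sketch (our own illustration, not one of the paper's three generated algorithms) is given below.

    ```python
    def greedy_coloring(adj):
        """First-fit greedy coloring: give each vertex the smallest color
        not already used by its colored neighbors."""
        colors = {}
        for v in adj:
            used = {colors[u] for u in adj[v] if u in colors}
            colors[v] = next(c for c in range(len(adj)) if c not in used)
        return colors

    # triangle plus a pendant vertex: three colors with this visit order
    print(greedy_coloring({0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [0]}))
    ```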

  10. Preliminary Analysis of a Randomized Trial of Computer Attention Training in Children with Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Steiner, N.; Sidhu, T. K.; Frenette, E. C.; Mitchell, K.; Perrin, E. C.

    2011-01-01

    Clinically significant attention problems among children present a significant obstacle to increasing student achievement. Computer-based attention training holds great promise as a way for schools to address this problem. The aim of this project is to evaluate the efficacy of two computer-based attention training systems in schools. One program…

  11. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software for automatic and unified monitoring of the computations. A complex computational task can be performed over different HPC systems; this requires output data synchronization between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on HPC systems.

  12. Planning Readings: A Comparative Exploration of Basic Algorithms

    ERIC Educational Resources Information Center

    Piater, Justus H.

    2009-01-01

    Conventional introductions to computer science present individual algorithmic paradigms in the context of specific, prototypical problems. To complement this algorithm-centric instruction, this study additionally advocates problem-centric instruction. I present an original problem drawn from students' lives that is simply stated but provides rich…

  13. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  14. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    PubMed

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of atoms of proteins from their sequences. However, the computational complexity of this problem makes mandatory the search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and recent trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  15. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)

  16. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustino, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs); and the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.
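
    In commonly used notation (assumed here for illustration, not taken from the slides), with I an observed radiance, x(z) an atmospheric parameter profile, and s_k a surface parameter, the quantities discussed above can be written as

    ```latex
    W(z) = \frac{\delta I}{\delta x(z)}, \qquad
    P_k  = \frac{\partial I}{\partial s_k}
    ```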

  17. Towards a theory of automated elliptic mesh generation

    NASA Technical Reports Server (NTRS)

    Cordova, J. Q.

    1992-01-01

    The theory of elliptic mesh generation is reviewed and the fundamental problem of constructing computational space is discussed. It is argued that the construction of computational space is an NP-Complete problem and therefore requires a nonstandard approach for its solution. This leads to the development of graph-theoretic, combinatorial optimization and integer programming algorithms. Methods for the construction of two dimensional computational space are presented.

  18. Nurturing Students' Problem-Solving Skills and Engagement in Computer-Mediated Communications (CMC)

    ERIC Educational Resources Information Center

    Chen, Ching-Huei

    2014-01-01

    The present study sought to investigate how to enhance students' well- and ill-structured problem-solving skills and increase productive engagement in computer-mediated communication with the assistance of external prompts, namely procedural and reflection. Thirty-three graduate students were randomly assigned to two conditions: procedural and…

  19. Universal algorithms and programs for calculating the motion parameters in the two-body problem

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Sukhanov, A. A.

    1979-01-01

    The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
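
    As a small taste of the position computation involved, the sketch below solves Kepler's equation by Newton iteration for the elliptic case only; the paper's algorithms are universal in eccentricity, which this sketch does not attempt. Names and tolerances are illustrative assumptions.

    ```python
    import math

    def solve_kepler(M, e, tol=1e-12):
        """Solve E - e*sin(E) = M for the eccentric anomaly E (0 <= e < 1)
        by Newton iteration; position and velocity follow from E."""
        E = M if e < 0.8 else math.pi     # standard starting guess
        for _ in range(50):
            dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
            E -= dE
            if abs(dE) < tol:
                break
        return E

    print(solve_kepler(1.0, 0.3))
    ```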

  20. Effect of local minima on adiabatic quantum optimization.

    PubMed

    Amin, M H S

    2008-04-04

    We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.
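
    The mechanism at work is the standard adiabatic runtime criterion (textbook form, not reproduced from the paper): with instantaneous ground and first excited states |0(s)> and |1(s)> of H(s),

    ```latex
    T \;\gg\; \frac{\max_{s \in [0,1]} \bigl|\langle 1(s) \mid \partial_s H(s) \mid 0(s) \rangle\bigr|}{g_{\min}^{2}},
    \qquad
    g_{\min} \;=\; \min_{s \in [0,1]} \bigl[ E_1(s) - E_0(s) \bigr]
    ```

    so a gap that closes exponentially in the problem size forces an exponentially long evolution time, which is the effect of the clustered local minima described above.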

  1. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem, executed on a CONVEX computer, was decreased from 181.6 seconds to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was on the Cray Y-MP using an average of 3.63 processors.
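
    A minimal dense-matrix sketch of the Lanczos iteration itself is given below; the matrix-vector multiply is the step the report vectorizes and parallelizes, and the optimized variable-band/sparse storage is not reproduced here. All names are illustrative.

    ```python
    import numpy as np

    def lanczos(A, k, seed=0):
        """Run k steps of the Lanczos iteration on a symmetric matrix A.
        Returns the diagonal (alpha) and off-diagonal (beta) of the
        tridiagonal matrix T_k, whose eigenvalues approximate extremal
        eigenvalues of A."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        q = rng.standard_normal(n)
        q /= np.linalg.norm(q)
        q_prev = np.zeros(n)
        alpha, beta = [], []
        b = 0.0
        for _ in range(k):
            w = A @ q - b * q_prev      # matrix-vector multiply dominates cost
            a = q @ w
            w -= a * q                  # orthogonalize against current vector
            b = np.linalg.norm(w)
            alpha.append(a)
            beta.append(b)
            if b == 0.0:                # breakdown: invariant subspace found
                break
            q_prev, q = q, w / b
        return np.array(alpha), np.array(beta[:-1])

    A = np.diag(np.arange(1.0, 101.0))
    a, b = lanczos(A, 30)
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    print(np.linalg.eigvalsh(T)[-1])    # approximates the largest eigenvalue, 100
    ```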

  2. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  3. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.

  4. Computational methods for inverse problems in geophysics: inversion of travel time observations

    USGS Publications Warehouse

    Pereyra, V.; Keller, H.B.; Lee, W.H.K.

    1980-01-01

    General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
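
    A hedged sketch of component (b), the nonlinear least-squares step, on an invented straight-ray toy model with a single unknown velocity (the paper's two-point ray tracing is far richer):

    ```python
    import numpy as np

    def gauss_newton(r, J, x0, iters=20):
        """Generic Gauss-Newton loop for min ||r(x)||^2 given residuals r
        and Jacobian J; each step solves a linear least-squares problem."""
        x = np.asarray(x0, float)
        for _ in range(iters):
            step, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
            x = x + step
        return x

    # Toy model: straight rays through a uniform medium of unknown velocity v,
    # so the predicted travel time is distance / v (invented numbers).
    d = np.array([10.0, 20.0, 35.0])                   # ray path lengths
    t_obs = d / 5.0 + np.array([0.01, -0.02, 0.015])   # noisy observed times
    r = lambda x: d / x[0] - t_obs                     # residual vector
    J = lambda x: (-d / x[0] ** 2).reshape(-1, 1)      # Jacobian dr/dv
    print(gauss_newton(r, J, [4.0]))                   # recovers v close to 5
    ```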

  5. Viscous flow computations using a second-order upwind differencing scheme

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1988-01-01

    In the present computations of a wide range of fluid flow problems by means of the primitive-variable Navier-Stokes equations, a mixed second-order upwinding scheme approximates the convective terms of the transport equations, and the scheme's accuracy is verified for convection-dominated, high-Reynolds-number flow problems. An adaptive dissipation scheme is used as a monotonic shock-capturing mechanism for supersonic flows. Many benchmark fluid flow problems, compressible and incompressible, laminar and turbulent, over wide ranges of Mach and Reynolds numbers, are studied to verify the accuracy and robustness of this numerical method.
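
    As a toy analogue of second-order upwinding (not the paper's mixed Navier-Stokes scheme), the sketch below advects a pulse with the classical Beam-Warming second-order upwind scheme on a periodic grid:

    ```python
    import numpy as np

    # Beam-Warming second-order upwind scheme for u_t + a*u_x = 0 (a > 0)
    # on a periodic grid.
    nx = 200
    dx = 1.0 / nx
    nu = 0.8                                  # Courant number a*dt/dx, stable for 0 <= nu <= 2
    x = np.arange(nx) * dx
    u = np.exp(-200.0 * (x - 0.2) ** 2)       # smooth initial pulse at x = 0.2
    for _ in range(100):                      # advect a distance nu*100*dx = 0.4
        um1, um2 = np.roll(u, 1), np.roll(u, 2)
        u = (u - 0.5 * nu * (3.0 * u - 4.0 * um1 + um2)
               + 0.5 * nu ** 2 * (u - 2.0 * um1 + um2))
    print(x[np.argmax(u)])                    # pulse peak now near x = 0.6
    ```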

  6. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, D.K.

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.

  7. Decision making and problem solving with computer assistance

    NASA Technical Reports Server (NTRS)

    Kraiss, F.

    1980-01-01

    In modern guidance and control systems, the human as manager, supervisor, decision maker, problem solver, and troubleshooter often has to cope with a marginal mental workload. To improve this situation, computers should be used to relieve the operator of mental stress. This should not be done solely through increased automation, but by a reasonable sharing of tasks in a human-computer team, where the computer supports the human intelligence. Recent developments in this area are summarized. It is shown that interactive support of the operator by an intelligent computer is feasible during information evaluation, decision making, and problem solving. The artificial intelligence algorithms applied include pattern recognition and classification, adaptation and machine learning, as well as dynamic and heuristic programming. Elementary examples are presented to explain the basic principles.

  8. Some Student Problems: Bungi Jumping, Maglev Trains, and Misaligned Computer Monitors.

    ERIC Educational Resources Information Center

    Whineray, Scott

    1991-01-01

    Presented are three physics problems from the New Zealand Entrance Scholarship examinations which are generally attempted by more able students. Problem situations, illustrations, and solutions are detailed. (CW)

  9. Current problems in applied mathematics and mathematical physics

    NASA Astrophysics Data System (ADS)

    Samarskii, A. A.

    Papers are presented on such topics as mathematical models in immunology, mathematical problems of medical computer tomography, classical orthogonal polynomials depending on a discrete variable, and boundary layer methods for singular perturbation problems in partial derivatives. Consideration is also given to the computer simulation of supernova explosion, nonstationary internal waves in a stratified fluid, the description of turbulent flows by unsteady solutions of the Navier-Stokes equations, and the reduced Galerkin method for external diffraction problems using the spline approximation of fields.

  10. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
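
    A minimal sketch of the underlying linear algebra problem: power iteration for the left eigenvector with known eigenvalue 1 (a baseline for orientation, not one of the Krylov-subspace methods the record compares):

    ```python
    import numpy as np

    def stationary(P, tol=1e-12):
        """Power iteration for the stationary distribution pi of a
        row-stochastic transition matrix P: pi @ P = pi, i.e. a left
        eigenvector of P for the known eigenvalue 1."""
        pi = np.full(P.shape[0], 1.0 / P.shape[0])
        while True:
            nxt = pi @ P
            if np.abs(nxt - pi).sum() < tol:
                return nxt
            pi = nxt

    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])
    pi = stationary(P)
    print(pi, np.abs(pi @ P - pi).max())   # residual near machine precision
    ```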

  11. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable by brute force methods even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world satellite constellation task cast as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  12. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.

  13. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
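
    The paper reconstructs tomographic data; as a hedged, self-contained analogue, the sketch below applies the same split Bregman machinery (quadratic solve, shrinkage, Bregman update) to 1-D total-variation denoising:

    ```python
    import numpy as np

    def shrink(x, g):
        """Soft-thresholding, the closed-form l1 proximal step."""
        return np.sign(x) * np.maximum(np.abs(x) - g, 0.0)

    def tv_denoise_1d(f, mu=10.0, lam=1.0, iters=100):
        """1-D total-variation denoising, min_u mu/2*||u-f||^2 + |Du|_1,
        via the split Bregman iteration (D = forward differences)."""
        n = len(f)
        D = np.diff(np.eye(n), axis=0)         # (n-1) x n difference operator
        A = mu * np.eye(n) + lam * D.T @ D     # SPD matrix of the u-subproblem
        d = np.zeros(n - 1)
        b = np.zeros(n - 1)
        u = f.copy()
        for _ in range(iters):
            u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
            Du = D @ u
            d = shrink(Du + b, 1.0 / lam)      # shrinkage step
            b = b + Du - d                     # Bregman variable update
        return u

    rng = np.random.default_rng(1)
    truth = np.repeat([0.0, 1.0, 0.3], 50)                # piecewise-constant signal
    noisy = truth + 0.1 * rng.standard_normal(truth.size)
    print(np.abs(tv_denoise_1d(noisy) - truth).mean())    # error well below the 0.1 noise level
    ```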

  14. Children's Computers.

    ERIC Educational Resources Information Center

    Samaras, Anastasia P.

    1996-01-01

    Suggests that teachers and social context determine what young children acquire from computer experiences. Provides anecdotes of teachers working with children who are using a computer program to complete a picture puzzle. The computer allowed teachers to present a problem, witness children's cognitive capabilities, listen to their metacognitive…

  15. Effects of computing time delay on real-time control systems

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.

  16. Children and Computers Abstracts.

    ERIC Educational Resources Information Center

    Rothenberg, Dianne, Ed.

    1992-01-01

    Abstracts of reports of eight research studies on computer uses in children's education are presented. Topics covered include (1) LOGO computer language; (2) computer graphics for art instruction; (3) animation; (4) problem solving; (5) children's use of symbols; (6) an evaluation of a Chapter 1 program involving children's computer use; (7) peer…

  17. Simulation of Stagnation Region Heating in Hypersonic Flow on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    Hypersonic flow simulations using the node-based, unstructured grid code FUN3D are presented. Applications include simple (cylinder) and complex (towed ballute) configurations. Emphasis throughout is on computation of stagnation region heating in hypersonic flow on tetrahedral grids. Hypersonic flow over a cylinder provides a simple test problem for exposing any flaws in a simulation algorithm with regard to its ability to compute accurate heating on such grids. Such flaws predominantly derive from the quality of the captured shock. The importance of pure tetrahedral formulations is discussed. Algorithm adjustments for the baseline Roe/symmetric total-variation-diminishing (STVD) formulation to deal with simulation accuracy are presented. Formulations of surface normal gradients to compute heating and diffusion to the surface, as needed for a radiative equilibrium wall boundary condition and a finite catalytic wall boundary in the node-based unstructured environment, are developed. A satisfactory resolution of the heating problem on tetrahedral grids is not realized here; however, a definition of a test problem and a discussion of observed algorithm behaviors to date are presented in order to promote further research on this important problem.

  18. Computational aerodynamics requirements: The future role of the computer and the needs of the aerospace industry

    NASA Technical Reports Server (NTRS)

    Rubbert, P. E.

    1978-01-01

    The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools, such as powerful computers optimized for fluid flow problems, is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer, who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, one that is doubled when a design capability is added. The user must be able to see, understand, and interpret the results calculated. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.

  19. Investigation and Implementation of Matrix Permanent Algorithms for Identity Resolution

    DTIC Science & Technology

    2014-12-01

    calculation of the permanent of a matrix whose dimension is a function of target count [21]. However, the optimal approach for computing the permanent is...presently unclear. The primary objective of this project was to determine the optimal computing strategy(-ies) for the matrix permanent in tactical and...solving various combinatorial problems (see [16] for details and applications to a wide variety of problems) and thus can be applied to compute a
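
    One classical exact approach such a project could weigh is Ryser's inclusion-exclusion formula; a hedged sketch follows (illustrative only, not necessarily the strategy the report selected):

    ```python
    from itertools import combinations
    import numpy as np

    def permanent(A):
        """Exact matrix permanent via Ryser's inclusion-exclusion formula
        (exponential time, feasible only for small n)."""
        A = np.asarray(A, float)
        n = A.shape[0]
        total = 0.0
        for k in range(1, n + 1):
            for cols in combinations(range(n), k):
                total += (-1.0) ** k * np.prod(A[:, list(cols)].sum(axis=1))
        return (-1.0) ** n * total

    print(permanent(np.ones((4, 4))))   # all-ones 4x4: permanent = 4! = 24
    ```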

  20. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large-scale and background random aerospace fluctuations.

  1. Computers and Hot Potatoes: Starch for Teacher Preparation Diets.

    ERIC Educational Resources Information Center

    Johnson, Jerry

    1984-01-01

    Computers present a problem for mathematics teachers that may be solved through teacher education programs. Classroom teachers should be competent in programing languages, exploring software, and understanding the emphasis of computers in the mathematics curriculum. (DF)

  2. Computational Benefits Using an Advanced Concatenation Scheme Based on Reduced Order Models for RF Structures

    NASA Astrophysics Data System (ADS)

    Heller, Johann; Flisgen, Thomas; van Rienen, Ursula

    The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.

  3. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, the informatization of colleges mainly comprises the construction of campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing: data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and they can be accessed through all kinds of devices. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and applies cloud computing technology and methods to the construction of a college information sharing platform.

  4. Numerical Inverse Scattering for the Toda Lattice

    NASA Astrophysics Data System (ADS)

    Bilman, Deniz; Trogdon, Thomas

    2017-06-01

    We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the ( n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because ( n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.

  5. Numerical solutions of acoustic wave propagation problems using Euler computations

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.

    1984-01-01

    This paper reports solution procedures for problems arising from the study of engine inlet wave propagation. The first problem is the study of sound waves radiated from cylindrical inlets. The second one is a quasi-one-dimensional problem to study the effect of nonlinearities and the third one is the study of nonlinearities in two dimensions. In all three problems Euler computations are done with a fourth-order explicit scheme. For the first problem results are shown in agreement with experimental data and for the second problem comparisons are made with an existing asymptotic theory. The third problem is part of an ongoing work and preliminary results are presented for this case.

  6. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  7. An approach to quantum-computational hydrologic inverse analysis.

    PubMed

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  8. An approach to quantum-computational hydrologic inverse analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  9. Computation of Transonic Nozzle Sound Transmission and Rotor Problems by the Dispersion-Relation-Preserving Scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Aganin, Alexei

    2000-01-01

    The transonic nozzle transmission problem and the open rotor noise radiation problem are solved computationally. Both are multiple-length-scale problems. For efficient and accurate numerical simulation, the multiple-size-mesh multiple-time-step Dispersion-Relation-Preserving scheme is used to calculate the time-periodic solution. To ensure an accurate solution, high quality numerical boundary conditions are also needed. For the nozzle problem, a set of nonhomogeneous outflow boundary conditions is required. The nonhomogeneous boundary conditions not only generate the incoming sound waves but also, at the same time, allow the reflected acoustic waves and entropy waves, if present, to exit the computation domain without reflection. For the open rotor problem, there is an apparent singularity at the axis of rotation. An analytic extension approach is developed to provide a high quality axis boundary treatment.

  10. Effect of virtual memory on efficient solution of two model problems

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1977-01-01

    Computers with virtual memory architecture allow programs to be written as if they were small enough to be contained in memory. Two types of problems are investigated to show that this luxury can lead to quite inefficient performance if the programmer does not interact strongly with the characteristics of the operating system when developing the program. The two problems considered are the simultaneous solution of a large linear system of equations by Gaussian elimination and a model three-dimensional finite-difference problem. Runs on the Control Data STAR-100 computer are made to demonstrate the inefficiencies of programming the problems in the manner one would naturally use if the problems were indeed small enough to be contained in memory. Program redesigns are presented which achieve large improvements in performance through changes in the computational procedure and the data base arrangement.
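
    The locality principle behind those redesigns is easy to demonstrate; a hedged sketch on a modern machine follows (timings vary by hardware, and the STAR-100 cost was virtual-memory paging rather than cache misses):

    ```python
    import numpy as np, time

    # Traversal order vs. storage layout: summing a C-ordered array along
    # contiguous rows is cheaper than along strided columns, the same
    # locality effect the record's data-base rearrangements exploit.
    a = np.zeros((2000, 2000))
    t0 = time.perf_counter()
    row_sum = sum(a[i, :].sum() for i in range(2000))   # contiguous access
    t1 = time.perf_counter()
    col_sum = sum(a[:, j].sum() for j in range(2000))   # strided access
    t2 = time.perf_counter()
    print(f"rows: {t1 - t0:.3f}s  columns: {t2 - t1:.3f}s")
    ```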

  11. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.

  12. A new robust algorithm for computation of a triangle circumscribed sphere in E3 and a hypersphere simplex

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many applications in which a bounding sphere containing a given triangle in E3 is needed, e.g., fast collision detection, ray-triangle intersection in ray tracing, etc. This is a typical geometrical problem in E3, with applications in computational problems in general. In this paper a new fast and robust algorithm for circumscribed sphere computation in n-dimensional space is presented, and a specialization for the E3 space is given, too. The presented method is convenient for use on a GPU or with SSE or Intel's AVX instructions on a standard CPU.
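
    For orientation, the textbook E3 construction looks as follows; this is a hedged illustration of the problem being solved, not the paper's robust n-dimensional algorithm:

    ```python
    import numpy as np

    def circumsphere(a, b, c):
        """Center and radius of the sphere through a non-degenerate triangle
        in E3, via the classical cross-product formula."""
        a, b, c = (np.asarray(p, float) for p in (a, b, c))
        ab, ac = b - a, c - a
        n = np.cross(ab, ac)
        n2 = 2.0 * (n @ n)                 # vanishes for collinear vertices
        center = a + (np.dot(ac, ac) * np.cross(n, ab)
                      + np.dot(ab, ab) * np.cross(ac, n)) / n2
        return center, float(np.linalg.norm(center - a))

    center, r = circumsphere([0, 0, 0], [2, 0, 0], [0, 2, 0])
    print(center, r)   # center (1, 1, 0), radius sqrt(2)
    ```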

  13. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  14. Computation of aeroelastic characteristics and stress-strained state of parachutes

    NASA Astrophysics Data System (ADS)

Dneprov, Igor' V.

    The paper presents computed results for the stress-strain state and aeroelastic characteristics of different types of parachutes in the process of their interaction with a flow. Simulation of the aerodynamic part of the aeroelastic problem is based on the discrete vortex method, while the elastic part of the problem is solved by employing either the finite element method or the finite difference method. The research covers the following problems in the dynamic aeroelasticity of axisymmetric parachutes: parachute inflation, forebody influence on the aerodynamic characteristics of the object-parachute system, parachute disreefing, and parachute inflation in the presence of the engagement parachute. The paper also presents the solution of the spatial problem of static aeroelasticity for a single-envelope ram-air parachute. Some practical recommendations are suggested.

  15. Artificial intelligence and design: Opportunities, research problems and directions

    NASA Technical Reports Server (NTRS)

    Amarel, Saul

    1990-01-01

    The issues of industrial productivity and economic competitiveness are of major significance in the U.S. at present. By advancing the science of design, and by creating a broad computer-based methodology for automating the design of artifacts and of industrial processes, we can attain dramatic improvements in productivity. It is our thesis that developments in computer science, especially in Artificial Intelligence (AI) and in related areas of advanced computing, provide us with a unique opportunity to push beyond the present level of computer aided automation technology and to attain substantial advances in the understanding and mechanization of design processes. To attain these goals, we need to build on top of the present state of AI, and to accelerate research and development in areas that are especially relevant to design problems of realistic complexity. We propose an approach to the special challenges in this area, which combines 'core work' in AI with the development of systems for handling significant design tasks. We discuss the general nature of design problems, the scientific issues involved in studying them with the help of AI approaches, and the methodological/technical issues that one must face in developing AI systems for handling advanced design tasks. Looking at basic work in AI from the perspective of design automation, we identify a number of research problems that need special attention. These include finding solution methods for handling multiple interacting goals, formation problems, problem decompositions, and redesign problems; choosing representations for design problems with emphasis on the concept of a design record; and developing approaches for the acquisition and structuring of domain knowledge with emphasis on finding useful approximations to domain theories. Progress in handling these research problems will have major impact both on our understanding of design processes and their automation, and also on several fundamental questions that are of intrinsic concern to AI. We present examples of current AI work on specific design tasks, and discuss new directions of research, both as extensions of current work and in the context of new design tasks where domain knowledge is either intractable or incomplete. The domains discussed include Digital Circuit Design, Mechanical Design of Rotational Transmissions, Design of Computer Architectures, Marine Design, Aircraft Design, and Design of Chemical Processes and Materials. Work in these domains is significant on technical grounds, and it is also important for economic and policy reasons.

  16. Perceived problems with computer gaming and internet use among adolescents: measurement tool for non-clinical survey studies

    PubMed Central

    2014-01-01

    Background Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents and to study the association between screen time and perceived problems. Methods Cross-sectional school survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark; participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer and console gaming and internet use for communication and surfing. The outcome measures were three indexes of perceived problems related to computer and console gaming and internet use. Results The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (boys only) and internet use, odds ratios ranging from 6.90 to 10.23. Conclusion The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys about young people's everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent on gaming and internet use. Nevertheless, most schoolchildren who spent much time on gaming and internet use did not experience problems. PMID:24731270

  17. Reconfigurability in MDO Problem Synthesis. Part 1

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2004-01-01

    Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. This paper, Part 1 of two companion papers, discusses the fundamentals of REMS. Part 2 illustrates the methodology in more detail.

  18. Reconfigurability in MDO Problem Synthesis. Part 2

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2004-01-01

    Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. Part 1 of these two companion papers discusses the fundamentals of REMS. This paper, Part 2, illustrates the methodology in more detail.

  19. A computational study of the discretization error in the solution of the Spencer-Lewis equation by doubling applied to the upwind finite-difference approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, P.; Seth, D.L.; Ray, A.K.

    A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.

  20. Evaluation of Mathematical Self-Explanations with LSA in a Counterintuitive Problem of Probabilities

    ERIC Educational Resources Information Center

    Guiu, Jordi Maja

    2012-01-01

    In this paper, different types of mathematical explanations are presented in relation to the Monty Hall probability problem (card version), and the computational tool Latent Semantic Analysis (LSA) is used. At the moment, the results in the literature about this computational tool for studying texts show that this technique is…

  1. Suggestions for Content Selection and Presentation in High School Computer Textbooks

    ERIC Educational Resources Information Center

    Lin, Janet Mei-Chuen; Wu, Cheng-Chih

    2007-01-01

    Based on the findings from reviewing 32 textbooks in the past four years for Taiwan's Ministry of Education, we have identified common problems in the reviewed textbooks and analyzed their inadequacies. Typical problems include the Wintel bias, too much coverage of software application tools and too little of computer science concepts, too many…

  2. Evaluation of the Effectiveness of a Tablet Computer Application (App) in Helping Students with Visual Impairments Solve Mathematics Problems

    ERIC Educational Resources Information Center

    Beal, Carole R.; Rosenblum, L. Penny

    2018-01-01

    Introduction: The authors examined a tablet computer application (iPad app) for its effectiveness in helping students studying prealgebra to solve mathematical word problems. Methods: Forty-three visually impaired students (that is, those who are blind or have low vision) completed eight alternating mathematics units presented using their…

  3. The Problem of Defining Intelligence.

    ERIC Educational Resources Information Center

    Lubar, David

    1981-01-01

    The major philosophical issues surrounding the concept of intelligence are reviewed with respect to the problems surrounding the process of defining and developing artificial intelligence (AI) in computers. Various current definitions and problems with these definitions are presented. (MP)

  4. Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun

    1997-01-01

    Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength is employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.

  5. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of the deformation of three-dimensional metal matrix composite microvolumes, tens or hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.

  6. Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms

    NASA Astrophysics Data System (ADS)

    Johansson, Niklas; Larsson, Jan-Åke

    2017-09-01

    A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.

  7. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
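
    A hedged sketch of the overall loop, with a Gaussian radial-basis interpolant standing in for the kriging model and an invented test objective:

    ```python
    import numpy as np

    def rbf_surrogate(X, y, eps=30.0):
        """Gaussian radial-basis interpolant of sampled function values,
        a cheap stand-in for a kriging model."""
        X, y = np.asarray(X, float), np.asarray(y, float)
        K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
        w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
        return lambda t: np.exp(-eps * (t[:, None] - X[None, :]) ** 2) @ w

    f = lambda x: (x - 0.7) ** 2 + 0.1 * np.sin(20.0 * x)   # "expensive" objective
    X = list(np.linspace(0.0, 1.0, 5))
    Y = [f(x) for x in X]
    grid = np.linspace(0.0, 1.0, 401)
    for _ in range(10):
        s = rbf_surrogate(X, Y)            # refit surrogate to all samples
        xn = grid[np.argmin(s(grid))]      # grid search on the cheap surrogate
        X.append(xn)
        Y.append(f(xn))                    # one true evaluation per iteration
    print(min(zip(Y, X)))                  # best sample, near the true minimizer
    ```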

  8. The Multiple-Minima Problem in Protein Folding

    NASA Astrophysics Data System (ADS)

    Scheraga, Harold A.

    1991-10-01

    The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the buildup procedure, (b) optimization of electrostatics, (c) Monte Carlo plus energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins, and globular proteins. Present efforts are devoted to scaling these procedures up from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
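
    Procedure (c), Monte Carlo plus energy minimization, is easy to caricature on a 1-D toy landscape (hedged sketch; the energy function and parameters are invented, and scipy's generic minimizer stands in for force-field minimization):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def energy(x):
        """Rugged 1-D landscape with many local minima (toy stand-in)."""
        return float((x[0] - 1.0) ** 2 + 2.0 * np.cos(5.0 * x[0]))

    rng = np.random.default_rng(0)
    x = np.array([4.0])
    e = minimize(energy, x).fun                       # descend to a local minimum
    best_x, best_e = x, e
    for _ in range(100):
        res = minimize(energy, x + rng.normal(0.0, 1.0, 1))     # perturb, re-minimize
        if res.fun < e or rng.random() < np.exp(e - res.fun):   # Metropolis test
            x, e = res.x, res.fun
            if e < best_e:
                best_x, best_e = x, e
    print(best_x, best_e)                             # best minimum encountered
    ```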

  9. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differencing of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
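
    For reference, the standard first-order result that such post-optimality estimates rest on can be stated compactly (textbook sensitivity theory, not the paper's own notation):

    ```latex
    % First-order post-optimality sensitivity: at a constrained optimum with
    % active constraints g(x, p) = 0 and Lagrange multipliers \lambda, the
    % optimal objective f^* responds to a problem parameter p as
    \frac{\mathrm{d} f^{*}}{\mathrm{d} p}
      = \frac{\partial f}{\partial p}
      + \lambda^{\top} \frac{\partial g}{\partial p}
    % so first-order estimates of perturbed optima need no reoptimization.
    ```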

  10. Medical Information Processing by Computer.

    ERIC Educational Resources Information Center

    Kleinmuntz, Benjamin

    The use of the computer for medical information processing was introduced about a decade ago. Considerable inroads have now been made toward its applications to problems in medicine. Present uses of the computer, both as a computational and noncomputational device include the following: automated search of patients' files; on-line clinical data…

  11. A detailed experimental study of a DNA computer with two endonucleases.

    PubMed

    Sakowski, Sebastian; Krasiński, Tadeusz; Sarnik, Joanna; Blasiak, Janusz; Waldmajer, Jacek; Poplawski, Tomasz

    2017-07-14

    Great advances in biotechnology have allowed the construction of a computer from DNA. One of the proposed solutions is a biomolecular finite automaton, a simple two-state DNA computer without memory, which was presented by Ehud Shapiro's group at the Weizmann Institute of Science. The main problem with this computer, in which biomolecules carry out logical operations, is its complexity: increasing the number of states of the biomolecular automaton. In this study, we constructed (in laboratory conditions) a six-state DNA computer that uses two endonucleases (e.g., AcuI and BbvI) and a ligase. We present a detailed experimental verification of its feasibility. We describe the effect of the number of states, the length of the input data, and nondeterminism on the computing process. We also tested different automata (with three, four, and six states) running on various accepted input words of different lengths, such as ab, aab, aaab, and ababa, and on an unaccepted word ba. Moreover, this article presents the reaction optimization and the methods of eliminating certain biochemical problems occurring in the implementation of a biomolecular DNA automaton based on two endonucleases.

  12. A boundary element alternating method for two-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Krishnamurthy, T.

    1992-01-01

    A boundary element alternating method, denoted herein as BEAM, is presented for two dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.

  13. Dynamic interactions in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbib, M.A.; Amari, S.

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  14. Computers and the Multiplicity of Polynomial Roots.

    ERIC Educational Resources Information Center

    Wavrik, John J.

    1982-01-01

    Described are stages in the development of a computer program to solve a particular algebra problem and the nature of algebraic computation is presented. A program in BASIC is provided to give ideas to others for developing their own programs. (MP)

  15. Computation of forces from deformed visco-elastic biological tissues

    NASA Astrophysics Data System (ADS)

    Muñoz, José J.; Amat, David; Conte, Vito

    2018-04-01

    We present a least-squares-based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations, which we study here. The functional measures the error of the dynamic problem, discretised in time with a second-order implicit time-stepping scheme and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
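
    The regularised least-squares core and the parameter sweep behind an L-curve choice can be sketched generically (hedged toy with invented operators, not the paper's finite element functional):

    ```python
    import numpy as np

    def tikhonov(A, b, lam):
        """Regularised least squares: min ||A x - b||^2 + lam * ||x||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20)) @ np.diag(1.0 / np.arange(1, 21) ** 2)
    x_true = rng.standard_normal(20)
    b = A @ x_true + 1e-3 * rng.standard_normal(50)
    # Sweep the regularisation parameter; plotting log(residual) against
    # log(solution norm) and picking the corner is the L-curve criterion.
    for lam in (1e-10, 1e-8, 1e-6, 1e-4, 1e-2):
        x = tikhonov(A, b, lam)
        print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - b):.3e}  "
              f"norm={np.linalg.norm(x):.3e}")
    ```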

  16. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  17. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
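
    A minimal sketch of this problem class follows: calibrating the parameters of a nonlinear ODE model against noisy data with a stochastic global optimizer. SciPy's differential evolution stands in for the scatter-search metaheuristic of the paper, and the logistic model is a made-up example.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        def logistic(t, y, r, k):            # toy model, not from the paper
            return r * y * (1 - y / k)

        t_obs = np.linspace(0, 10, 25)
        true = solve_ivp(logistic, (0, 10), [0.1], args=(0.8, 5.0),
                         t_eval=t_obs).y[0]
        data = true + 0.05 * np.random.default_rng(1).standard_normal(true.size)

        def cost(theta):                     # sum-of-squares calibration error
            r, k = theta
            sim = solve_ivp(logistic, (0, 10), [0.1], args=(r, k),
                            t_eval=t_obs).y[0]
            return np.sum((sim - data) ** 2)

        result = differential_evolution(cost, bounds=[(0.01, 3), (1, 10)],
                                        seed=1, maxiter=40)
        print(result.x)                      # estimates of (r, k)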

  18. Computer problem-solving coaches for introductory physics: Design and usability studies

    NASA Astrophysics Data System (ADS)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  19. Computer-Based Mathematics Instructions for Engineering Students

    NASA Technical Reports Server (NTRS)

    Khan, Mustaq A.; Wall, Curtiss E.

    1996-01-01

    Almost every engineering course involves mathematics in one form or another. The analytical process of developing mathematical models is very important for engineering students. However, the computational process involved in the solution of some mathematical problems may be very tedious and time consuming. There is a significant amount of mathematical software, such as Mathematica, Mathcad, and Maple, designed to aid in the solution of these instructional problems. The use of these packages in classroom teaching can greatly enhance understanding and save time. Integration of computer technology in mathematics classes, without de-emphasizing the traditional analytical aspects of teaching, has proven very successful and is becoming almost essential. Sample computer laboratory modules are developed for presentation in the classroom setting. This is accomplished through the use of overhead projectors linked to graphing calculators and computers. Model problems are carefully selected from different areas.

  20. Goals and Objectives for Computing in the Associated Colleges of the St. Lawrence Valley.

    ERIC Educational Resources Information Center

    Grupe, Fritz H.

    A forecast of the computing requirements of the Associated Colleges of the St. Lawrence Valley, an analysis of their needs, and specifications for a joint computer system are presented. Problems encountered included the lack of resources and computer sophistication at the member schools and a dearth of experience with long-term computer consortium…

  1. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  2. Application of Real-World Problems in Computer Science Education: Teachers' Beliefs, Motivational Orientations and Practices

    ERIC Educational Resources Information Center

    Ferreira, Deller James; Ambrósio, Ana Paula Laboissière; Melo, Tatiane F. N.

    2018-01-01

    This article describes how, because computer science is present in many activities of daily life, students need to develop skills to solve problems that improve the lives of people in general. This article investigates correlations between teachers' motivational orientations, beliefs and practices with respect to the application of…

  3. Higher-order adaptive finite-element methods for Kohn–Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motamarri, P.; Nowak, M.R.; Leiter, K.

    2013-11-15

    We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings, of the order of 1000-fold relative to linear finite-elements, can be realized for both all-electron and local pseudopotential calculations by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with plane-wave discretization for non-periodic local pseudopotential calculations, and is within an order of magnitude of the Gaussian basis for all-electron calculations. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.
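
    The Chebyshev acceleration idea can be sketched as Chebyshev-filtered subspace iteration; the toy below uses a dense diagonal stand-in for the discretised Hamiltonian, not the finite-element operator of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, k, deg = 200, 6, 10
        H = np.diag(np.linspace(0.0, 10.0, n))       # stand-in Hamiltonian

        def cheb_filter(H, X, a, b, deg):
            """Damp components with eigenvalues in [a, b]; amplify those below a."""
            e, c = (b - a) / 2.0, (b + a) / 2.0
            Y0, Y1 = X, (H @ X - c * X) / e
            for _ in range(2, deg + 1):              # Chebyshev recurrence
                Y0, Y1 = Y1, 2.0 / e * (H @ Y1 - c * Y1) - Y0
            return Y1

        X = rng.standard_normal((n, k))
        for _ in range(20):
            X = cheb_filter(H, X, a=1.0, b=10.0, deg=deg)
            X, _ = np.linalg.qr(X)                   # re-orthonormalize
            w, V = np.linalg.eigh(X.T @ H @ X)       # Rayleigh-Ritz step
            X = X @ V
        print(w)     # approximations to the k lowest eigenvalues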

  4. On some Aitken-like acceleration of the Schwarz method

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Tromeur-Dervout, D.

    2002-12-01

    In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the floating-point performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
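
    The acceleration principle is Aitken's Delta-squared extrapolation; restarting a fixed-point iteration from the extrapolated value is exactly Steffensen's method, which matches the variant named above. A scalar toy example follows (a linearly convergent fixed-point map, not a domain decomposition solver).

        import math

        def aitken(x0, x1, x2):
            # Delta-squared extrapolation of a linearly convergent sequence.
            return x2 - (x2 - x1) ** 2 / ((x2 - x1) - (x1 - x0))

        g = math.cos                       # fixed point x* = cos(x*) ~ 0.7390851
        x = 0.5
        for k in range(4):
            x = aitken(x, g(x), g(g(x)))   # extrapolate, restart (Steffensen)
            print(k, x)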

  5. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  6. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  7. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  8. Supervisors with Micros: Trends and Training Needs.

    ERIC Educational Resources Information Center

    Bryan, Leslie A., Jr.

    1986-01-01

    Results of a study conducted by Purdue University concerning the use of computers by supervisors in manufacturing firms are presented and discussed. Examines access to computers, minicomputers versus mainframes, training time on computers, replacement of staff, creation of personnel problems, and training methods. (CT)

  9. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.

  10. Final Technical Report: Sparse Grid Scenario Generation and Interior Algorithms for Stochastic Optimization in a Parallel Computing Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrotra, Sanjay

    2016-09-07

    The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44] and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new ‘central’ cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.

  11. Parallelization of the preconditioned IDR solver for modern multicore computer systems

    NASA Astrophysics Data System (ADS)

    Bessonov, O. A.; Fedoseyev, A. I.

    2012-10-01

    This paper presents the analysis, parallelization and optimization approach for the large sparse matrix solver CNSPACK on modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, and kinetic and quantum problems. It employs an iterative IDR algorithm with ILU preconditioning (of user-chosen order). CNSPACK has been successfully used during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, there has been a dramatic change in processor architectures and computer system organization in recent years. Because of this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of the successful implementation of efficient parallelization are presented for advanced computer systems (Intel Core i7-9xx or two-processor Xeon 55xx/56xx).
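
    A rough sketch of the solver's core idea, ILU-preconditioned iterative solution of a sparse linear system, is given below. SciPy does not ship an IDR(s) implementation, so BiCGSTAB stands in for the IDR iteration, and the 1D Laplacian is a stand-in test matrix.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 1000
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU factors
        M = spla.LinearOperator((n, n), ilu.solve)    # preconditioner ~ A^-1

        x, info = spla.bicgstab(A, b, M=M)
        print("info:", info, " residual:", np.linalg.norm(A @ x - b))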

  12. Combining destination diversion decisions and critical in-flight event diagnosis in computer aided testing of pilots

    NASA Technical Reports Server (NTRS)

    Rockwell, T. H.; Giffin, W. C.; Romer, D. J.

    1984-01-01

    Rockwell and Giffin (1982) and Giffin and Rockwell (1983) have discussed the use of computer aided testing (CAT) in the study of pilot response to critical in-flight events. The present investigation represents an extension of these earlier studies. In testing pilot responses to critical in-flight events, use is made of a Plato-touch CRT system operating on a menu based format. In a typical diagnostic problem, the pilot was presented with symptoms within a flight scenario. In one problem, the pilot had four minutes to obtain the information needed to make a diagnosis of the problem. In the reported research, an attempt was made to combine both a diagnosis and a diversion scenario into a single computer aided test. Tests with nine subjects were conducted. The obtained results and their significance are discussed.

  13. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    ERIC Educational Resources Information Center

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  14. Computational Algorithmization: Limitations in Problem Solving Skills in Computational Sciences Majors at University of Oriente

    ERIC Educational Resources Information Center

    Castillo, Antonio S.; Berenguer, Isabel A.; Sánchez, Alexander G.; Álvarez, Tomás R. R.

    2017-01-01

    This paper analyzes the results of a diagnostic study carried out with second-year students of the computational sciences majors at University of Oriente, Cuba, to determine the limitations that they present in computational algorithmization. An exploratory study was conducted using quantitative and qualitative methods. The results allowed…

  15. Some Thoughts Regarding Practical Quantum Computing

    NASA Astrophysics Data System (ADS)

    Ghoshal, Debabrata; Gomez, Richard; Lanzagorta, Marco; Uhlmann, Jeffrey

    2006-03-01

    Quantum computing has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing. The ability to perform parallel operations over an exponentially large computational space has proved to be the main advantage of the quantum computing model. In this regard, we are particularly interested in the potential applications of quantum computers to enhance real software systems of interest to the defense, industrial, scientific and financial communities. However, while much has been written in the popular and scientific literature about the benefits of the quantum computational model, several of the problems associated with the practical implementation of real-life complex software systems on quantum computers are often ignored. In this presentation we will argue that practical quantum computation is not as straightforward as commonly advertised, even if the technological problems associated with the manufacturing and engineering of large-scale quantum registers were solved overnight. We will discuss some of the frequently overlooked difficulties that plague quantum computing in the areas of memories, I/O, addressing schemes, compilers, oracles, approximate information copying, logical debugging, error correction and fault-tolerant computing protocols.

  16. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
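
    One of the patterns named above, the prefix scan, can be sketched as the work-efficient (Blelloch) two-sweep algorithm. The sweeps are serial loops here, but every vectorized assignment is a batch of independent operations that could run concurrently, which is what makes the pattern parallel.

        import numpy as np

        def exclusive_scan(a):
            """Blelloch exclusive prefix sum; length must be a power of two."""
            x = np.array(a, dtype=np.int64)
            n, d = x.size, 1
            while d < n:                     # up-sweep: partial reductions
                x[2*d-1::2*d] += x[d-1::2*d]
                d *= 2
            x[-1] = 0
            while d > 1:                     # down-sweep: distribute prefixes
                d //= 2
                t = x[d-1::2*d].copy()
                x[d-1::2*d] = x[2*d-1::2*d]
                x[2*d-1::2*d] += t
            return x

        print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [ 0 3 4 11 11 15 16 22]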

  17. A Computational Procedure for Identifying Bilinear Representations of Nonlinear Systems Using Volterra Kernels

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.; Silva, Walter A.

    2008-01-01

    A computational procedure for identifying the state-space matrices corresponding to discrete bilinear representations of nonlinear systems is presented. A key feature of the method is the use of first- and second-order Volterra kernels (first- and second-order pulse responses) to characterize the system. The present method is based on an extension of a continuous-time bilinear system identification procedure given in a 1971 paper by Bruni, di Pillo, and Koch. The analytical and computational considerations that underlie the original procedure and its extension to the title problem are presented and described, pertinent numerical considerations associated with the process are discussed, and results obtained from the application of the method to a variety of nonlinear problems from the literature are presented. The results of these exploratory numerical studies are decidedly promising and provide sufficient credibility for further examination of the applicability of the method.

  18. Troubleshooting Computer Problems--a Teachers' Guide.

    ERIC Educational Resources Information Center

    Zeitz, Leigh

    1995-01-01

    Presents a troubleshooting flow chart for teachers and others to use when trying to figure out why their computers do not work correctly. Written mainly for Macintosh computers, the purpose of this guide is to save school technology coordinators time and to help educate teachers. (Author/LRW)

  19. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
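
    A minimal CP instance of the kind the paper derives can be sketched for 1D total-variation denoising; this toy problem is for illustration only and is not the breast-CT application of the article.

        import numpy as np

        # Chambolle-Pock for  min_x 0.5*||x - b||^2 + lam*||D x||_1  in 1D.
        rng = np.random.default_rng(0)
        n, lam = 200, 0.5
        signal = np.repeat([0.0, 1.0, -0.5, 0.5], n // 4)  # piecewise constant
        b = signal + 0.1 * rng.standard_normal(n)

        D = lambda x: np.diff(x)                           # K = forward difference
        DT = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))

        tau = sigma = 0.5            # tau*sigma*||K||^2 <= 1 since ||K||^2 <= 4
        x = x_bar = b.copy()
        y = np.zeros(n - 1)
        for _ in range(300):
            y = np.clip(y + sigma * D(x_bar), -lam, lam)     # prox of F* (dual)
            x_new = (x - tau * DT(y) + tau * b) / (1 + tau)  # prox of G (primal)
            x_bar = 2 * x_new - x                            # extrapolation
            x = x_new
        print("TV noisy vs denoised:",
              np.abs(D(b)).sum().round(2), np.abs(D(x)).sum().round(2))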

  20. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scaler, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  1. Computation of multi-dimensional viscous supersonic flow

    NASA Technical Reports Server (NTRS)

    Buggeln, R. C.; Kim, Y. N.; Mcdonald, H.

    1986-01-01

    A method has been developed for two- and three-dimensional computations of viscous supersonic jet flows interacting with an external flow. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases associated with supersonic jet flow are presented and compared with other calculations for axisymmetric cases. Demonstration calculations indicate that the computational technique has great promise as a tool for calculating a wide range of supersonic flow problems, including jet flow. Finally, a User's Manual is presented for the computer code used to perform the calculations.

  2. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    ERIC Educational Resources Information Center

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB™. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

  3. Programmable Calculator Use in Undergraduate Dynamics, Vibrations, and Elementary Structures Courses.

    ERIC Educational Resources Information Center

    Cutchins, M. A.

    1982-01-01

    Presents programmable calculator solutions to selected problems, including area moments of inertia and principal values, the 2-D principal stress problem, C.G. and pitch inertia computations, 3-D eigenvalue problems, 3 DOF vibrations, and a complex flutter determinant. (SK)

  4. A Comparison of Solver Performance for Complex Gastric Electrophysiology Models

    PubMed Central

    Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.

    2016-01-01

    Computational techniques for solving the systems of equations arising in gastric electrophysiology have not previously been studied with a view to an efficient solution process. We present a computationally challenging problem of simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and the mesh resolution required to capture geometric and functional features necessitate efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased the workload considerably. Preconditioning based on block Jacobi and algebraic multigrid solvers was found to have the best overall solution times and least iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
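
    A toy version of such a preconditioner comparison is sketched below, counting conjugate-gradient iterations with and without a Jacobi (diagonal) preconditioner; the tridiagonal test matrix is a stand-in, not a gastric-mesh system, and SciPy is assumed.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 2000
        main = 2.0 + np.linspace(0.0, 5.0, n)          # varying diagonal
        off = -np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
        b = np.ones(n)

        def cg_iters(M=None):
            count = [0]
            cb = lambda xk: count.__setitem__(0, count[0] + 1)
            x, info = spla.cg(A, b, M=M, callback=cb)
            return count[0], info

        M_jacobi = sp.diags(1.0 / main)                # Jacobi preconditioner
        print("CG iterations (none):  ", cg_iters())
        print("CG iterations (Jacobi):", cg_iters(M_jacobi))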

  5. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the method ran from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
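
    For contrast, the baseline the method is compared against, minimum-norm allocation by pseudoinverse followed by clipping to effector limits, is easy to sketch; the effectiveness matrix and limits below are made-up numbers.

        import numpy as np

        B = np.array([[ 1.0,  0.8, -1.0, -0.8],   # roll effectiveness
                      [ 0.2, -0.4,  0.2, -0.4],   # pitch effectiveness
                      [ 0.1, -0.1, -0.1,  0.1]])  # yaw effectiveness
        m_des = np.array([0.4, -0.1, 0.05])        # desired moments

        u = np.linalg.pinv(B) @ m_des              # minimum-norm commands
        u_clipped = np.clip(u, -0.5, 0.5)          # apply position limits

        print("u        :", u.round(3))
        print("achieved :", (B @ u_clipped).round(3))
        # Clipping can leave B u != m_des: the pseudoinverse does not exploit
        # the full attainable moment set, which is the gap such methods close.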

  6. Acquisition of ICU data: concepts and demands.

    PubMed

    Imhoff, M

    1992-12-01

    As the issue of data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears only feasible with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands of the interfaces between source of data and patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface that must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources are necessary in the future for additional interfacing devices such as speech recognition and 3D-GUI. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development leaving no room for solutions based on traditional concepts of personal computers.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman,, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study addresses a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.

  8. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    NASA Astrophysics Data System (ADS)

    Demenev, A. G.

    2018-02-01

    The present work analyzes the capabilities of the high-performance computing (HPC) infrastructure at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including “UEC-Aviadvigatel” JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize aircraft engine geometry for fan noise reduction. We analysed how to use Perm State University's HPC hardware resources and software services efficiently. The results demonstrate that the Perm State University HPC infrastructure is mature enough to take on industrial-scale problems of developing a CAE system with HPC methods and CFD solvers.

  9. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multi-phase flow and reactive transport problems, we developed THC-MP, a high performance computing code for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing and sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance achieved using THC-MP on parallel computing facilities.

  10. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  11. Evolutionary inference via the Poisson Indel Process

    PubMed Central

    Bouchard-Côté, Alexandre; Jordan, Michael I.

    2013-01-01

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296
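
    The Poisson picture can be illustrated with a toy sampler: the number of events on a branch is Poisson with mean equal to rate times branch length, so events on the whole phylogeny are drawn branch by branch. The rate and tree below are made up, and this is not the PIP inference code.

        import numpy as np

        rng = np.random.default_rng(0)
        indel_rate = 0.3                     # events per unit branch length
        branch_lengths = {"root->A": 0.8, "root->B": 1.2,
                          "B->C": 0.5, "B->D": 0.7}

        for branch, t in branch_lengths.items():
            n_events = rng.poisson(indel_rate * t)        # Poisson event count
            times = np.sort(rng.uniform(0.0, t, n_events))
            print(f"{branch}: {n_events} event(s) at {times.round(2)}")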

  12. Mathematical computer programs: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Computer programs, routines, and subroutines for aiding engineers, scientists, and mathematicians in direct problem solving are presented. Also included is a group of items that affords the same users greater flexibility in the use of software.

  13. Least-squares Legendre spectral element solutions to sound propagation problems.

    PubMed

    Lin, W H

    2001-02-01

    This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, Siam 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes.

  14. Supersonic nonlinear potential analysis

    NASA Technical Reports Server (NTRS)

    Siclari, M. J.

    1984-01-01

    The NCOREL computer code was developed to compute supersonic flow fields of wings and bodies. The method encompasses an implicit finite difference transonic relaxation method to solve the full potential equation in a spherical coordinate system. Two basic topics were studied to broaden the applicability and usefulness of the present method, which is encompassed within the computer code NCOREL, for the treatment of supersonic flow problems. The first topic is computing efficiency. Accelerated schemes are in use for transonic flow problems; one such scheme is the approximate factorization (AF) method, and an AF scheme is developed here for the supersonic flow problem. The second topic is the computation of wake flows. The proper modeling of wake flows is important for multicomponent configurations such as wing-body and multiple lifting surfaces, where the wake of one lifting surface has a pronounced effect on a downstream body or other lifting surfaces.

  15. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a concept termed the Equivalent Distributed Program of global geospatial queries, to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing, as well as a method for the equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level, is presented to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.

  16. CSM solutions of rotating blade dynamics using integrating matrices

    NASA Technical Reports Server (NTRS)

    Lakin, William D.

    1992-01-01

    The dynamic behavior of flexible rotating beams continues to receive considerable research attention, as it constitutes a fundamental problem in applied mechanics. Further, beams comprise parts of many rotating structures of engineering significance. A topic of particular interest at the present time is the development of techniques for obtaining the behavior, in both space and time, of a rotor acted upon by a simple airload. Most current work on problems of this type uses solution techniques based on normal modes. It is certainly true that normal modes cannot be disregarded, as knowledge of natural blade frequencies is always important. However, the present work has considered a computational structural mechanics (CSM) approach to rotor blade dynamics problems in which the physical properties of the rotor blade provide input for a direct numerical solution of the relevant boundary-and-initial-value problem. Analysis of the dynamics of a given rotor system may require solution of the governing equations over a long time interval corresponding to many revolutions of the loaded flexible blade. For this reason, most of the common techniques in computational mechanics, which treat the space-time behavior concurrently, cannot be applied to the rotor dynamics problem without a large expenditure of computational resources. By contrast, the integrating matrix technique of computational mechanics has the ability to consistently incorporate boundary conditions and 'remove' dependence on a space variable. For problems involving both space and time, this feature of the integrating matrix approach can thus generate a 'splitting' which forms the basis of an efficient CSM method for numerical solution of rotor dynamics problems.
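
    The integrating-matrix idea itself is easy to sketch: a matrix W maps samples of a function at grid points to approximations of its running integral, so spatial integrations in the blade equations become matrix multiplications. A trapezoidal-rule toy version on a uniform grid follows.

        import numpy as np

        n = 101
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]

        W = np.zeros((n, n))               # row i integrates from x[0] to x[i]
        for i in range(1, n):
            W[i, :i+1] = h
            W[i, 0] = W[i, i] = h / 2.0    # trapezoidal end weights

        f = np.cos(np.pi * x)
        F = W @ f                          # ~ integral of cos(pi t) from 0 to x
        print(np.max(np.abs(F - np.sin(np.pi * x) / np.pi)))   # small error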

  17. Graphics and composite material computer program enhancements for SPAR

    NASA Technical Reports Server (NTRS)

    Farley, G. L.; Baker, D. J.

    1980-01-01

    User documentation is provided for additional computer programs developed for use in conjunction with SPAR. These programs plot digital data, simplify input for composite material section properties, and compute lamina stresses and strains. Sample problems are presented including execution procedures, program input, and graphical output.

  18. A Software Laboratory Environment for Computer-Based Problem Solving.

    ERIC Educational Resources Information Center

    Kurtz, Barry L.; O'Neal, Micheal B.

    This paper describes a National Science Foundation-sponsored project at Louisiana Technological University to develop computer-based laboratories for "hands-on" introductions to major topics of computer science. The underlying strategy is to develop structured laboratory environments that present abstract concepts through the use of…

  19. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  20. Multi-objective reverse logistics model for integrated computer waste management.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.

  1. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.

  2. Human-computer interfaces applied to numerical solution of the Plateau problem

    NASA Astrophysics Data System (ADS)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present a code in Matlab to solve the Problem of Plateau numerically; the code includes a human-computer interface. The Problem of Plateau has applications in areas of knowledge such as, for instance, Computer Graphics. The solution method is the same as that of the Surface Evolver, but the difference is a complete graphical interface with the user. This will enable us to implement other kinds of interfaces such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most physically challenged people.

  3. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  4. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
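
    As a small illustration of the dimension-independence property cited above (using a conventional pseudorandom generator, not the Liouville-feedback principle the record describes), a fixed budget of Monte Carlo samples estimates an integral with comparable statistical error in 1, 10, or 100 dimensions:

      # Standard Monte Carlo property: with a fixed sample budget N, the
      # statistical error scales like 1/sqrt(N) regardless of dimension d.
      import random

      random.seed(1)

      def mc_estimate(d, n=20_000):
          # estimate the integral of f(x) = (x_1 + ... + x_d) / d over
          # [0,1]^d; the exact value is 0.5 in every dimension
          acc = 0.0
          for _ in range(n):
              acc += sum(random.random() for _ in range(d)) / d
          return acc / n

      for d in (1, 10, 100):
          est = mc_estimate(d)
          print(f"d={d:4d}  estimate={est:.4f}  |error|={abs(est - 0.5):.1e}")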

  5. Multi-period project portfolio selection under risk considerations and stochastic income

    NASA Astrophysics Data System (ADS)

    Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood

    2018-02-01

    This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve the problem. The algorithm is based on the genetic algorithm (GA), a prominent method for this type of problem. The GA is enhanced by a new solution representation and well-selected operators, and is hybridized with a local search mechanism to obtain better solutions in less time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of some prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to considerably improve computing time.
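
    A minimal sketch of the GA core for budget-constrained project selection (single period, invented data; the authors' algorithm is multi-period, risk-aware, and hybridized with local search and PSO):

      # Minimal GA sketch for project selection under a budget -- single
      # period and invented numbers, not the authors' hybrid algorithm.
      import random

      random.seed(2)
      profit = [12, 9, 15, 7, 11, 4]   # expected net profit per project
      cost   = [ 5, 4,  7, 3,  5, 2]
      BUDGET = 15

      def fitness(bits):
          c = sum(ci for ci, b in zip(cost, bits) if b)
          p = sum(pi for pi, b in zip(profit, bits) if b)
          return p if c <= BUDGET else 0          # infeasible -> worthless

      def crossover(a, b):
          cut = random.randrange(1, len(a))       # one-point crossover
          return a[:cut] + b[cut:]

      def mutate(bits, rate=0.1):
          return [1 - b if random.random() < rate else b for b in bits]

      pop = [[random.randint(0, 1) for _ in profit] for _ in range(30)]
      for _ in range(100):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]                      # truncation selection
          pop = parents + [mutate(crossover(random.choice(parents),
                                            random.choice(parents)))
                           for _ in range(20)]

      best = max(pop, key=fitness)
      print("portfolio:", best, "profit:", fitness(best))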

  6. Continued research on selected parameters to minimize community annoyance from airplane noise

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1981-01-01

    Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.

  7. CMOS VLSI Layout and Verification of a SIMD Computer

    NASA Technical Reports Server (NTRS)

    Zheng, Jianqing

    1996-01-01

    A CMOS VLSI layout and verification of a 3 x 3 processor parallel computer has been completed. The layout was done using the MAGIC tool and the verification using HSPICE. Suggestions for expanding the computer into a million processor network are presented. Many problems that might be encountered when implementing a massively parallel computer are discussed.

  8. Symbolic computation of the Birkhoff normal form in the problem of stability of the triangular libration points

    NASA Astrophysics Data System (ADS)

    Shevchenko, I. I.

    2008-05-01

    The problem of stability of the triangular libration points in the planar circular restricted three-body problem is considered. A software package, intended for normalization of autonomous Hamiltonian systems by means of computer algebra, is designed so that normalization problems of high analytical complexity could be solved. It is used to obtain the Birkhoff normal form of the Hamiltonian in the given problem. The normalization is carried out up to the 6th order of expansion of the Hamiltonian in the coordinates and momenta. Analytical expressions for the coefficients of the normal form of the 6th order are derived. Though intermediary expressions occupy gigabytes of the computer memory, the obtained coefficients of the normal form are compact enough for presentation in typographic format. The analogue of the Deprit formula for the stability criterion is derived in the 6th order of normalization. The obtained floating-point numerical values for the normal form coefficients and the stability criterion confirm the results by Markeev (1969) and Coppola and Rand (1989), while the obtained analytical and exact numeric expressions confirm the results by Meyer and Schmidt (1986) and Schmidt (1989). The given computational problem is solved without constructing a specialized algebraic processor, i.e., the designed computer algebra package has a broad field of applicability.

  9. Technology, attributions, and emotions in post-secondary education: An application of Weiner’s attribution theory to academic computing problems

    PubMed Central

    Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039

  10. Technology, attributions, and emotions in post-secondary education: An application of Weiner's attribution theory to academic computing problems.

    PubMed

    Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.

  11. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
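
    A hedged sketch of the information-theoretic idea behind test sequencing, reduced to the single-fault case with made-up fault priors and test signatures (the paper's algorithms handle multiple faults via Lagrangian relaxation, which is not shown): at each step, choose the test whose outcome minimizes the expected remaining entropy of the fault distribution.

      # Greedy entropy-reduction test selection, single-fault simplification.
      import math

      # invented single-fault priors and test signatures
      faults = {"f1": 0.5, "f2": 0.3, "f3": 0.2}
      tests = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f2", "f3"}}  # faults failing each test

      def entropy(p):
          return -sum(v * math.log2(v) for v in p.values() if v > 0)

      def condition(p, test, failed):
          # posterior over faults given that `test` failed (True) or passed (False)
          sub = {f: v for f, v in p.items() if (f in tests[test]) == failed}
          z = sum(sub.values())
          return {f: v / z for f, v in sub.items()} if z > 0 else {}

      def expected_entropy(p, test):
          p_fail = sum(v for f, v in p.items() if f in tests[test])
          out = 0.0
          for failed, prob in ((True, p_fail), (False, 1 - p_fail)):
              if prob > 0:
                  out += prob * entropy(condition(p, test, failed))
          return out

      p, remaining = dict(faults), set(tests)
      while entropy(p) > 1e-9 and remaining:
          best = min(remaining, key=lambda t: expected_entropy(p, t))
          remaining.discard(best)
          print("apply", best, "; expected entropy after:",
                round(expected_entropy(p, best), 3))
          p = condition(p, best, failed=False)   # assume 'pass' for this demo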

  12. Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    PubMed Central

    Martín H., José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, it is proved here that the probability of requiring a large value of the parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
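
    For contrast with the article's parametric method, a plain backtracking 3-colorer shows what the two kinds of certificates look like in the small: it returns either a valid coloring (the structure is present) or None after exhausting the search space (provably absent). Unlike the article's algorithm, this is exponential in the worst case.

      # Plain backtracking 3-coloring: a "present" certificate (a coloring)
      # or an "absent" certificate (exhaustive search found none). This is
      # NOT the article's polynomial parametric algorithm.
      def three_color(adj):
          n = len(adj)
          color = [-1] * n

          def ok(v, c):
              return all(color[u] != c for u in adj[v])

          def solve(v):
              if v == n:
                  return True
              for c in range(3):
                  if ok(v, c):
                      color[v] = c
                      if solve(v + 1):
                          return True
                      color[v] = -1
              return False

          return color if solve(0) else None

      # K4 is not 3-colorable; C5 (a 5-cycle) is.
      k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
      c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
      print("K4:", three_color(k4))   # None -> no admissible structure
      print("C5:", three_color(c5))   # e.g. [0, 1, 0, 1, 2]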

  13. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
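
    A brute-force sketch of the underlying marginalization on a toy two-protein model (invented probabilities; peptide p2 is degenerate in that it maps to both proteins). The paper's contribution is computing such posteriors efficiently via graph transformations; naive enumeration like this grows exponentially with the number of proteins.

      # Brute-force protein posterior marginalization with a degenerate
      # peptide, on a tiny invented model.
      from itertools import product

      proteins = ["A", "B"]
      peptides = {"p1": {"A"}, "p2": {"A", "B"}}   # peptide -> source proteins
      observed = {"p1": True, "p2": True}          # peptide detected or not
      prior, p_emit, p_noise = 0.5, 0.9, 0.05      # P(present), P(detect|present), P(detect|absent)

      def likelihood(present):
          L = 1.0
          for pep, obs in observed.items():
              p_detect = p_emit if peptides[pep] & present else p_noise
              L *= p_detect if obs else (1 - p_detect)
          return L

      # joint over all 2^n presence configurations, then marginalize
      posterior = dict.fromkeys(proteins, 0.0)
      Z = 0.0
      for bits in product([0, 1], repeat=len(proteins)):
          present = {p for p, b in zip(proteins, bits) if b}
          w = likelihood(present)
          for p in proteins:
              w *= prior if p in present else (1 - prior)
          Z += w
          for p in present:
              posterior[p] += w

      for p in proteins:
          print(p, "posterior presence:", round(posterior[p] / Z, 3))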

  14. Computer-aided design and computer science technology

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  15. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
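
    The geometric core of any heliostat kinematics is that the mirror normal must bisect the sun direction and the mirror-to-target direction. A sketch under an assumed axis convention (z up, roll about x, then pitch about y; not necessarily the authors' convention, and without their actuator-length equations):

      # Mirror-normal bisector and a pitch/roll decomposition under one
      # assumed convention; illustrative only.
      import math

      def normalize(v):
          n = math.sqrt(sum(c * c for c in v))
          return tuple(c / n for c in v)

      def mirror_normal(sun_dir, target_dir):
          # the normal bisects the incoming (sun) and outgoing (target) rays
          s, t = normalize(sun_dir), normalize(target_dir)
          return normalize(tuple(si + ti for si, ti in zip(s, t)))

      def pitch_roll(normal):
          # assumes normal = Ry(pitch) @ Rx(roll) @ (0, 0, 1)
          nx, ny, nz = normal
          pitch = math.atan2(nx, nz)                    # rotation about y
          roll = math.atan2(-ny, math.hypot(nx, nz))    # rotation about x
          return math.degrees(pitch), math.degrees(roll)

      sun = (0.3, 0.1, 0.9)        # direction toward the sun
      target = (0.0, 0.0, 1.0)     # reflect straight up toward a receiver
      n = mirror_normal(sun, target)
      print("normal:", tuple(round(c, 3) for c in n))
      print("pitch, roll (deg):", tuple(round(a, 1) for a in pitch_roll(n)))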

  16. Studying Parental Decision Making with Micro-Computers: The CPSI Technique.

    ERIC Educational Resources Information Center

    Holden, George W.

    A technique for studying how parents think, make decisions, and solve childrearing problems, Computer-Presented Social Interactions (CPSI), is described. Two studies involving CPSI are presented. The first study concerns a common parental cognitive task: causal analysis of an undesired behavior. The task was to diagnose the cause of non-contingent…

  17. Presentation Trainer: What Experts and Computers Can Tell about Your Nonverbal Communication

    ERIC Educational Resources Information Center

    Schneider, J.; Börner, D.; van Rosmalen, P.; Specht, M.

    2017-01-01

    The ability to present effectively is essential for professionals; therefore, oral communication courses have become part of the curricula for higher education studies. However, speaking in public is still a challenge for many graduates. To tackle this problem, driven by the recent advances in computer vision techniques and prosody analysis,…

  18. A Computer Simulation for Teaching Diagnosis of Secondary Ignition Problems

    ERIC Educational Resources Information Center

    Diedrick, Walter; Thomas, Rex

    1977-01-01

    Presents the methodology and findings of an experimental project to determine the viability of computer assisted as opposed to more traditional methods of instruction for teaching one phase of automotive troubleshooting. (Editor)

  19. Aerodynamic Analyses Requiring Advanced Computers, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.

  20. Computer Aided Braille Trainer

    PubMed Central

    Sibert, Thomas W.

    1984-01-01

    The problems involved in teaching visually impaired persons to Braille are numerous. Training while the individual is still sighted and using a computer to assist is one way of shortening the learning curve. Such a solution is presented here.

  1. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  2. The Reality of National Computer Networking for Higher Education. Proceedings of the 1978 EDUCOM Fall Conference. EDUCOM Series in Computing and Telecommunications in Higher Education 3.

    ERIC Educational Resources Information Center

    Emery, James C., Ed.

    A comprehensive review of the current status, prospects, and problems of computer networking in higher education is presented from the perspectives of both computer users and network suppliers. Several areas of computer use are considered including applications for instruction, research, and administration in colleges and universities. In the…

  3. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)

  4. Computer memory: the LLL experience. [Octopus computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1976-02-01

    Those aspects of Octopus computer network design are reviewed that relate to memory and storage. Emphasis is placed on the difficulties and problems that arise because of the limitations of present storage devices, and indications are made of the directions in which technological advance could be of most value. (auth)

  5. THE COMPUTER AND THE ARCHITECTURAL PROFESSION.

    ERIC Educational Resources Information Center

    HAVILAND, DAVID S.

    THE ROLE OF ADVANCING TECHNOLOGY IN THE FIELD OF ARCHITECTURE IS DISCUSSED IN THIS REPORT. PROBLEMS IN COMMUNICATION AND THE DESIGN PROCESS ARE IDENTIFIED. ADVANTAGES AND DISADVANTAGES OF COMPUTERS ARE MENTIONED IN RELATION TO MAN AND MACHINE INTERACTION. PRESENT AND FUTURE IMPLICATIONS OF COMPUTER USAGE ARE IDENTIFIED AND DISCUSSED WITH RESPECT…

  6. Mathematical Modeling and Computational Thinking

    ERIC Educational Resources Information Center

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  7. 2016 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Jim; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  8. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  9. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  10. Cumulative reports and publications through December 31, 1989

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A complete list of reports from the Institute for Computer Applications in Science and Engineering (ICASE) is presented. The major categories of the current ICASE research program are: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effectual numerical methods; computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, structural analysis, and chemistry; computer systems and software, especially vector and parallel computers, microcomputers, and data management. Since ICASE reports are intended to be preprints of articles that will appear in journals or conference proceedings, the published reference is included when it is available.

  11. Computing aggregate properties of preimages for 2D cellular automata.

    PubMed

    Beer, Randall D

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm, incremental aggregation, that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
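
    A naive baseline makes the scaling problem concrete: enumerating all 2^9 predecessor neighborhoods of a single Game of Life cell is trivial, but the number of cases grows multiplicatively with patch size, which is the blow-up incremental aggregation is designed to avoid (this sketch is not the paper's algorithm):

      # Count the 3x3 neighborhoods that produce a live or dead center
      # cell after one Game of Life step, by direct enumeration.
      from itertools import product

      def life_step_center(nbhd):
          # nbhd is a 9-tuple (row-major 3x3); index 4 is the center cell
          live = sum(nbhd) - nbhd[4]
          return 1 if (live == 3 or (nbhd[4] == 1 and live == 2)) else 0

      counts = {0: 0, 1: 0}
      for nbhd in product([0, 1], repeat=9):
          counts[life_step_center(nbhd)] += 1

      print("neighborhoods ->", counts)   # {0: 372, 1: 140}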

  12. Computing aggregate properties of preimages for 2D cellular automata

    NASA Astrophysics Data System (ADS)

    Beer, Randall D.

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm—incremental aggregation—that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.

  13. Numerical modeling of the atmosphere with an isentropic vertical coordinate

    NASA Technical Reports Server (NTRS)

    Hsu, Yueh-Jiuan G.; Arakawa, Akio

    1990-01-01

    A theta-coordinate model simulating the nonlinear evolution of a baroclinic wave is presented. In the model, vertical discretization maintains important integral constraints such as conservation of the angular momentum and total energy. A massless-layer approach is used in the treatment of the intersections of coordinate surfaces with the lower boundary. This formally eliminates the intersection problem, but raises other computational problems. Horizontal discretization of the continuity and momentum equations in the model are designed to overcome these problems. Selected results from a 10-day integration with the 25-layer, beta-plane version of the model are presented. It is concluded that the model can simulate the nonlinear evolution of a baroclinic wave and associated dynamical processes without major computational difficulties.

  14. Mathematical modeling of heat transfer problems in the permafrost

    NASA Astrophysics Data System (ADS)

    Gornov, V. F.; Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.

    2014-11-01

    In this work we present results of numerical simulation of three-dimensional temperature fields in soils for various applied problems: a railway line under permafrost conditions for different geometries, a horizontal underground storage tunnel, and greenhouses of various designs in the Far North. The mathematical model of the process is a nonstationary heat equation with phase transitions of pore water. The numerical realization of the problem is based on the finite element method using the FEniCS scientific computing library. The numerical calculations are run on high-performance computing systems.

  15. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  16. Optimal reconfiguration strategy for a degradable multimodule computing system

    NASA Technical Reports Server (NTRS)

    Lee, Yann-Hang; Shin, Kang G.

    1987-01-01

    The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, there emerges an active reconfiguration strategy based not only on the occurrence of failure but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration, and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.
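
    Since the record reduces reconfiguration to knapsack-type formulations, a standard 0/1 knapsack dynamic program illustrates the final optimization step (illustrative values; the paper's version is nonlinear and mission-dependent):

      # Standard 0/1 knapsack dynamic program, invented data.
      def knapsack(values, weights, capacity):
          # best[w] = max value achievable with total weight <= w
          best = [0] * (capacity + 1)
          for v, wt in zip(values, weights):
              for w in range(capacity, wt - 1, -1):   # descending: each item once
                  best[w] = max(best[w], best[w - wt] + v)
          return best[capacity]

      # e.g. reward of assigning modules to computation vs. their resource cost
      print(knapsack(values=[10, 7, 12, 4], weights=[3, 2, 4, 1], capacity=6))  # 21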

  17. Skylab-EREP studies in computer mapping of terrain in the Cripple Creek-Canon City area of Colorado

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.; Ranson, K. J.; Holstrom, R. L.

    1975-01-01

    Multispectral-scanner data from satellites are used as input to computers for automatically mapping terrain classes of ground cover. Some major problems faced in this remote-sensing task include: (1) the effect of mixtures of classes and, primarily because of mixtures, the problem of what constitutes accurate control data, and (2) effects of the atmosphere on spectral responses. The fundamental principles of these problems are presented along with results of studies of them for a test site of Colorado, using LANDSAT-1 data.

  18. Computation of rotor-stator interaction using the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Whitfield, David L.; Chen, Jen-Ping

    1995-01-01

    The numerical scheme presented belongs to a family of codes known as UNCLE (UNsteady Computation of fieLd Equations) as reported by Whitfield (1995), that is being used to solve problems in a variety of areas including compressible and incompressible flows. This derivation is specifically developed for general unsteady multi-blade-row turbomachinery problems. The scheme solves the Reynolds-averaged N-S equations with the Baldwin-Lomax turbulence model.

  19. Pareto Joint Inversion of Love and Quasi Rayleigh's waves - synthetic study

    NASA Astrophysics Data System (ADS)

    Bogacz, Adrian; Dalton, David; Danek, Tomasz; Miernik, Katarzyna; Slawinski, Michael A.

    2017-04-01

    In this contribution a specific application of Pareto joint inversion to a geophysical problem is presented. The Pareto criterion combined with particle swarm optimization was used to solve geophysical inverse problems for Love and quasi-Rayleigh waves. The basic theory of the forward-problem calculation for the chosen surface waves is described. To avoid computational problems, some simplifications were made; these allowed faster and more straightforward calculation without loss of generality of the solution. Under the restrictions of the solving scheme, the considered model must have exactly two layers: an elastic isotropic surface layer and an elastic isotropic half-space of infinite thickness. The aim of the inversion is to obtain the elastic parameters and model geometry from dispersion data. Different cases were considered in the calculations, such as different numbers of modes for different wave types and different frequencies. The implementation uses the OpenMP standard for parallel computing, which helps reduce computation times. The results of experimental computations are presented and discussed. This research was performed in the context of The Geomechanics Project supported by Husky Energy. Also, this research was partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013, and by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.

  20. An Achievement Degree Analysis Approach to Identifying Learning Problems in Object-Oriented Programming

    ERIC Educational Resources Information Center

    Allinjawi, Arwa A.; Al-Nuaim, Hana A.; Krause, Paul

    2014-01-01

    Students often face difficulties while learning object-oriented programming (OOP) concepts. Many papers have presented various assessment methods for diagnosing learning problems to improve the teaching of programming in computer science (CS) higher education. The research presented in this article illustrates that although max-min composition is…

  1. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
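
    For reference, the core DE/rand/1/bin update on a cheap single-objective test function; the Pareto ranking and neural-network response-surface metamodels described in the abstract are not shown:

      # Core DE/rand/1/bin loop minimizing the sphere function.
      import random

      random.seed(3)
      DIM, NP, F, CR, GENS = 5, 20, 0.8, 0.9, 200

      def sphere(x):
          return sum(xi * xi for xi in x)

      pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
      for _ in range(GENS):
          for i in range(NP):
              # three distinct donors, excluding the target vector
              a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
              j_rand = random.randrange(DIM)
              trial = [a[j] + F * (b[j] - c[j])
                       if (random.random() < CR or j == j_rand) else pop[i][j]
                       for j in range(DIM)]
              if sphere(trial) <= sphere(pop[i]):     # greedy selection
                  pop[i] = trial

      print("best value:", min(map(sphere, pop)))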

  2. Communications oriented programming of parallel iterative solutions of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Patrick, M. L.; Pratt, T. W.

    1986-01-01

    Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.

  3. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Andrs, David; Martineau, Richard Charles

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. Although the multi-fluid formulation is not yet fully developed, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrates the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.

  4. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the additional (overdetermined) boundary data using the conjugate gradient method. Results of calculations are presented for examples with exact solutions, as well as for cases where the additional condition is specified with random errors. The results show the high efficiency of the conjugate gradient method for the numerical solution of this inverse problem.
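
    The linear-algebra core of the approach, a plain conjugate gradient iteration for a symmetric positive definite system (the article wraps such iterations around a finite-difference solver to recover the missing boundary condition; this sketch shows only the CG kernel):

      # Plain conjugate gradients for an SPD system Ax = b.
      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x          # residual
          p = r.copy()           # search direction
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      x = conjugate_gradient(A, b)
      print(x, "residual:", np.linalg.norm(A @ x - b))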

  5. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.

  6. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactors. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  7. Aerodynamic Analyses Requiring Advanced Computers, Part 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers are presented which deal with results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include: viscous flows, boundary layer equations, turbulence modeling and Navier-Stokes equations, and internal flows.

  8. Fluid dynamics computer programs for NERVA turbopump

    NASA Technical Reports Server (NTRS)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.

  9. Relationship of Selected Abilities to Problem Solving Performance.

    ERIC Educational Resources Information Center

    Harmel, Sarah Jane

    This study investigated five ability tests related to the water-jug problem. Previous analyses identified two processes used during solution: means-ends analysis and memory of visited states. Subjects were 240 undergraduate psychology students. A real-time computer system presented the problem and recorded responses. Ability tests were paper and…

  10. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  11. Solving multiconstraint assignment problems using learning automata.

    PubMed

    Horn, Geir; Oommen, B John

    2010-02-01

    This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
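
    A sketch of the smallest building block, a single linear reward-inaction (L_RI) variable-structure automaton learning the best of three actions in a stationary random environment (invented reward probabilities; the paper coordinates many such automata to solve the mapping problem):

      # One L_RI variable-structure learning automaton.
      import random

      random.seed(4)
      reward_prob = [0.3, 0.8, 0.5]        # environment: P(reward | action)
      p = [1 / 3] * 3                      # action probabilities
      LAMBDA = 0.05                        # learning rate

      for _ in range(2000):
          a = random.choices(range(3), weights=p)[0]
          if random.random() < reward_prob[a]:          # rewarded
              # move probability mass toward the rewarded action
              p = [pi + LAMBDA * (1 - pi) if i == a else pi * (1 - LAMBDA)
                   for i, pi in enumerate(p)]
          # on penalty, L_RI does nothing ("inaction")

      print([round(pi, 3) for pi in p])    # converges toward action 1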

  12. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
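
    An ODE analogue makes the sensitivity-equation idea concrete: integrate the state y' = -p*y together with its parameter sensitivity s = dy/dp, which satisfies s' = -p*s - y, and check the result against a finite difference. The paper does the same at PDE level with a CFD scheme; this toy problem is only illustrative.

      # Forward sensitivity equation integrated alongside the state.
      def integrate(p, y0=1.0, T=1.0, n=10000):
          dt = T / n
          y, s = y0, 0.0
          for _ in range(n):                 # forward Euler on (y, s) jointly
              y, s = y + dt * (-p * y), s + dt * (-p * s - y)
          return y, s

      p = 2.0
      y, s = integrate(p)
      eps = 1e-6
      fd = (integrate(p + eps)[0] - integrate(p - eps)[0]) / (2 * eps)
      print("sensitivity:", s, " finite difference:", fd)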

  13. The potential of computer software that supports the diagnosis of workplace ergonomics in shaping health awareness

    NASA Astrophysics Data System (ADS)

    Lubkowska, Wioletta

    2017-11-01

    The growing prevalence of health problems among computer workstation workers has become one of the biggest threats to the overall health of the population. That is why many modern scientists are looking for ways and methods to prevent and reverse these negative trends. The purpose of this article is to present the potential for practical use of computer programs to design ergonomic workplaces and assess postural loads. These programs help configure the computer workstation correctly and adopt the correct body position during work, which reduces the risk of health problems. Visually attractive programs help encourage and inspire those who work with a computer to introduce ergonomic solutions and reject a sedentary lifestyle.

  14. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.

  15. Control system estimation and design for aerospace vehicles with time delay

    NASA Technical Reports Server (NTRS)

    Allgaier, G. R.; Williams, T. L.

    1972-01-01

    The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.

  16. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.

    1976-01-01

    An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.

  17. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, the performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.

  18. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. These numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  19. A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
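
    The core building block, a discrete optimal transport map obtained from a linear program, can be sketched in a few lines with scipy. This is the plain ensemble-transform step between weighted and equally weighted particles, under stated assumptions; it omits the augmentation, matrix-scaling solver, and derivative-free particle moves that give the AET method its efficiency.

        import numpy as np
        from scipy.optimize import linprog

        def ensemble_transform(x, logw):
            """Map n particles x (shape (n, d)) carrying posterior
            log-weights logw to n equally weighted particles via the
            optimal-transport linear program."""
            n = len(x)
            w = np.exp(logw - logw.max()); w /= w.sum()
            # cost: squared distances between particles, flattened row-major
            c = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1).ravel()
            A_eq = np.zeros((2 * n, n * n))
            for i in range(n):
                A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j T_ij = w_i
                A_eq[n + i, i::n] = 1.0            # sum_i T_ij = 1/n
            b_eq = np.concatenate([w, np.full(n, 1.0 / n)])
            T = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs").x.reshape(n, n)
            return n * T.T @ x                     # new equally weighted particles

        x = np.random.default_rng(0).standard_normal((30, 1))  # N(0,1) prior draws
        moved = ensemble_transform(x, -0.5 * (x[:, 0] - 1.0) ** 2)
        print(moved.mean())   # roughly 0.5, the exact Gaussian posterior mean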

  20. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  1. Exploring Electronics Laboratory Experiments Using Computer Software

    ERIC Educational Resources Information Center

    Gandole, Yogendra Babarao

    2011-01-01

    The roles of teachers and students are changing, and there are undoubtedly ways of learning not yet discovered. However, computer and software technology may play a significant role in identifying problems, presenting solutions, and supporting life-long learning. It is clear that computer-based educational technology has reached the point where…

  2. What Does Research on Computer-Based Instruction Have to Say to the Reading Teacher?

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1987-01-01

    Examines questions typically asked about the effectiveness of computer-based reading instruction, suggesting that these questions must be refined to provide meaningful insight into the issues involved. Describes several critical problems with existing research and presents overviews of research on the effects of computer-based instruction on…

  3. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  4. Making Water Pollution a Problem in the Classroom Through Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Flowers, John D.

    Alternative means for dealing with water pollution control are presented for students and teachers. One computer oriented program is described in terms of teaching wastewater treatment and pollution concepts to middle and secondary school students. Suggestions are given to help teachers use a computer simulation program in their classrooms.…

  5. The Computer Industry. High Technology Industries: Profiles and Outlooks.

    ERIC Educational Resources Information Center

    International Trade Administration (DOC), Washington, DC.

    A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…

  6. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation of a fluctuating fluid flow and the prediction of the noise it produces) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models, and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part, are presented. The choice of approach is shown to be problem dependent.

  7. Hierarchical optimization for neutron scattering problems

    DOE PAGES

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...

    2016-03-14

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

  8. Hierarchical optimization for neutron scattering problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

  9. High order discretization techniques for real-space ab initio simulations

    NASA Astrophysics Data System (ADS)

    Anderson, Christopher R.

    2018-03-01

    In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.

  10. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  11. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has meshing-related problems and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take both advantages of the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
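
    The inverse-problem update can be summarized generically. Below is a minimal Tikhonov-regularized Gauss-Newton iteration, assuming user-supplied forward-map and Jacobian callbacks; the paper's EIT-specific forward solve (the hybrid FE-EFG model) and Jacobian computation are where the real work lies.

        import numpy as np

        def gauss_newton_tikhonov(f, J, d_obs, x0, alpha=1e-2, iters=20):
            """Minimize ||f(x) - d_obs||^2 + alpha*||x - x0||^2 by
            regularized Gauss-Newton steps (generic sketch)."""
            x = x0.copy()
            for _ in range(iters):
                r = f(x) - d_obs                    # data residual
                Jx = J(x)                           # Jacobian at x
                H = Jx.T @ Jx + alpha * np.eye(len(x))
                g = Jx.T @ r + alpha * (x - x0)
                x = x - np.linalg.solve(H, g)       # Gauss-Newton step
            return x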

  12. Robot Geometry and the High School Curriculum.

    ERIC Educational Resources Information Center

    Meyer, Walter

    1988-01-01

    Description of the field of robotics and its possible use in high school computational geometry classes emphasizes motion planning exercises and computer graphics displays. Eleven geometrical problems based on robotics are presented along with the correct solutions and explanations. (LRW)

  13. COMPUTER TECHNOLOGY AND SOCIAL CHANGE,

    DTIC Science & Technology

    This paper presents a discussion of the social, political, economic and psychological problems associated with the rapid growth and development of...public officials and responsible groups is required to increase public understanding of the computer as a powerful tool, to select appropriate

  14. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due dates. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. In addition, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied to the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359
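
    As a baseline for the scheduling objective, a plain simulated-annealing loop with pairwise swaps is easy to state (crisp due dates assumed here; the paper's hybrids add GA/VNS neighborhoods and fuzzy due dates on top of this skeleton). All parameter values are illustrative.

        import math, random

        def weighted_tardiness(seq, p, d, w):
            """Total weighted tardiness of a job sequence on one machine."""
            t, total = 0.0, 0.0
            for j in seq:
                t += p[j]
                total += w[j] * max(0.0, t - d[j])
            return total

        def anneal(p, d, w, T0=100.0, cooling=0.995, iters=20000):
            seq = list(range(len(p)))
            best = cur = weighted_tardiness(seq, p, d, w)
            best_seq, T = seq[:], T0
            for _ in range(iters):
                i, j = random.sample(range(len(seq)), 2)
                seq[i], seq[j] = seq[j], seq[i]          # propose a swap
                new = weighted_tardiness(seq, p, d, w)
                if new <= cur or random.random() < math.exp((cur - new) / T):
                    cur = new
                    if new < best:
                        best, best_seq = new, seq[:]
                else:
                    seq[i], seq[j] = seq[j], seq[i]      # undo rejected swap
                T *= cooling
            return best_seq, best

        p = [4, 2, 6, 3, 5]             # processing times
        d = [6, 4, 14, 7, 10]           # (crisp) due dates
        w = [1.0, 2.0, 1.5, 1.0, 2.5]   # tardiness weights
        print(anneal(p, d, w))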

  15. A vectorized Lanczos eigensolver for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1990-01-01

    The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
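
    The kernel being vectorized is the short Lanczos recurrence itself; a bare NumPy version (without the reorthogonalization that production eigensolvers such as the one described here require) looks like this:

        import numpy as np

        def lanczos(A, k, seed=0):
            """Run k steps of the Lanczos recurrence for symmetric A; the
            extreme eigenvalues of the small tridiagonal matrix T then
            approximate those of A. No reorthogonalization."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            q = rng.standard_normal(n); q /= np.linalg.norm(q)
            q_prev = np.zeros(n)
            alpha, beta = np.zeros(k), np.zeros(k - 1)
            for j in range(k):
                v = A @ q                     # dominant, vectorizable step
                alpha[j] = q @ v
                v = v - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
                if j < k - 1:
                    beta[j] = np.linalg.norm(v)
                    q_prev, q = q, v / beta[j]
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            return np.linalg.eigvalsh(T)

        A = np.random.default_rng(1).standard_normal((200, 200)); A = A + A.T
        print(lanczos(A, 40)[[0, -1]])            # approximate extremes
        print(np.linalg.eigvalsh(A)[[0, -1]])     # exact extremes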

  16. Accuracy of Time Integration Approaches for Stiff Magnetohydrodynamics Problems

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Chacon, L.

    2003-10-01

    The simulation of complex physical processes with multiple time scales presents a continuing challenge to the computational plasma physicist due to the coexistence of fast and slow time scales. Within computational plasma physics, practitioners have developed and used linearized methods, semi-implicit methods, and time splitting in an attempt to tackle such problems. All of these methods are understood to generate numerical error. We are currently developing algorithms which remove such error for MHD problems [1,2]. These methods do not rely on linearization or time splitting. We are also attempting to analyze the errors introduced by existing "implicit" methods using modified equation analysis (MEA) [3]. In this presentation we will briefly cover the major findings in [3]. We will then extend this work further into MHD. This analysis will be augmented with numerical experiments with the hope of gaining insight, particularly into how these errors accumulate over many time steps. [1] L. Chacon, D.A. Knoll, J.M. Finn, J. Comput. Phys., vol. 178, pp. 15-36 (2002) [2] L. Chacon and D.A. Knoll, J. Comput. Phys., vol. 188, pp. 573-592 (2003) [3] D.A. Knoll, L. Chacon, L.G. Margolin, V.A. Mousseau, J. Comput. Phys., vol. 185, pp. 583-611 (2003)

  17. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem

    DOE PAGES

    Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...

    2015-10-02

    The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.

  18. Molecular computation: RNA solutions to chess problems.

    PubMed

    Faulhammer, D; Cukras, A R; Lipton, R J; Landweber, L F

    2000-02-15

    We have expanded the field of "DNA computers" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the "Knight problem," which asks generally what configurations of knights can one place on an n x n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 x 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine "bits" or placeholders in a combinatorial RNA library. We recovered a set of "winning" molecules that describe solutions to this problem.
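
    For scale, the 9-bit instance is tiny by electronic standards; a brute-force enumeration over all 2^9 boards, using precomputed knight-attack pairs, verifies the solution set in silico:

        # Brute-force (silicon, not RNA) enumeration of the 3 x 3 knight
        # problem: keep every 9-bit board with no attacking knight pair.
        moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                 (1, -2), (2, -1), (-1, -2), (-2, -1)]
        attacks = set()
        for r in range(3):
            for c in range(3):
                for dr, dc in moves:
                    r2, c2 = r + dr, c + dc
                    if 0 <= r2 < 3 and 0 <= c2 < 3:
                        attacks.add((min(3*r + c, 3*r2 + c2),
                                     max(3*r + c, 3*r2 + c2)))

        valid = [b for b in range(2 ** 9)
                 if all(not (b >> i & 1 and b >> j & 1) for i, j in attacks)]
        print(len(valid), "non-attacking configurations")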

  19. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operation management and applied mathematics, having numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation.
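
    A conventional (non-DNA) reference solution helps when checking small instances: replicating each individual's cost row ceil(n/m) times reduces the UAP, in the variant where one individual may take several jobs, to a rectangular assignment problem that scipy's Hungarian solver handles directly. The cost matrix below is made up for illustration.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        cost = np.array([[4., 2., 8., 5.],
                         [3., 7., 6., 1.],
                         [9., 4., 2., 6.]])   # m = 3 individuals, n = 4 jobs
        m, n = cost.shape
        k = -(-n // m)                        # ceil(n / m) copies per row
        rows, cols = linear_sum_assignment(np.tile(cost, (k, 1)))
        for r, c in zip(rows, cols):
            print(f"individual {r % m} <- job {c} (cost {cost[r % m, c]})")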

  20. Specialized computer system to diagnose critical lined equipment

    NASA Astrophysics Data System (ADS)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Morozova, O. A.; Nedelkin, A. A.

    2018-05-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors propose and describe the structure of a specialized computer system to diagnose critical lined equipment. Comparative results of diagnosing the lining condition with the basic system and with the proposed specialized computer system are presented. To automate the evaluation of lining condition and to support decision making regarding the operation mode of the lined equipment, specialized software has been developed.

  1. Accuracy of a Computer Assisted Program for ’Classic’ Presentations of Dental Pain

    DTIC Science & Technology

    1989-04-11

    [Fragmented OCR of the report's tables of dental diagnoses; recoverable entries include: abscess/infection/cellulitis, endo/perio combined problem, myofascial pain/muscle spasms, reversible pulpitis, enamel fracture, food impaction, fractured crown with small pulp exposure, displacement/mobility of tooth (guarded prognosis), necrotizing ulcerative gingivitis, neurologic injury, osseous sequestrum, occlusal trauma, periodontal abscess, and pericoronitis/erupting tooth.]

  2. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  3. Discrete is it enough? The revival of Piola-Hencky keynotes to analyze three-dimensional Elastica

    NASA Astrophysics Data System (ADS)

    Turco, Emilio

    2018-04-01

    Complex problems such as those concerning the mechanics of materials can be confronted only by means of numerical simulation. Analytical methods are useful for building guidelines or reference solutions but, for general cases of technical interest, problems have to be solved numerically, especially in the case of large displacements and deformations. Continuous models probably arose to provide inspiring examples and stemmed from homogenization techniques. These techniques allowed the solution of some paradigmatic examples but, in general, still require a discretization method for solving the problems dictated by applications. Therefore, and also taking into account that computing power is nowadays cheap and widely available, the question arises: why not use a discrete model for 3D beams directly? In other words, it could be interesting to formulate a discrete model without using an intermediate continuum one, as the latter ultimately has to be discretized in any case. These simple considerations immediately evoke some very basic models developed many years ago, when computing power was practically nonexistent but the problem of finding simple solutions to beam deformation problems was already an emerging one. Indeed, in recent years the keynotes of Hencky and Piola have attracted renewed attention [see, one for all, the work (Turco et al. in Zeitschrift für Angewandte Mathematik und Physik 67(4):1-28, 2016)]: generalizing their results, the present paper presents and discusses a novel, directly discrete, three-dimensional beam model in the framework of geometrically nonlinear analysis. Using a stepwise algorithm based essentially on Newton's method to compute the extrapolations and on Riks' arc-length method to perform the corrections, we obtained numerical simulations showing the computational effectiveness of the presented model: indeed, it strikes a convenient balance between accuracy and computational cost.

  4. An iterative transformation procedure for numerical solution of flutter and similar characteristics-value problems

    NASA Technical Reports Server (NTRS)

    Gossard, Myron L

    1952-01-01

    An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
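
    In modern terms the procedure is a relative of matrix iteration with deflation. The sketch below, for a symmetric problem, is not Wielandt's original formulation but conveys the idea of extracting successive characteristic values: converge the dominant mode by power iteration, deflate it out, and repeat.

        import numpy as np

        def power_iteration(A, iters=500):
            """Power iteration for the dominant characteristic value of A."""
            x = np.ones(A.shape[0]); x /= np.linalg.norm(x)
            for _ in range(iters):
                y = A @ x
                x = y / np.linalg.norm(y)
            return x @ A @ x, x          # Rayleigh quotient and mode shape

        def wielandt_deflate(A, lam, x):
            """Remove a converged pair (symmetric A) so the iteration can
            find the next characteristic value."""
            x = x / np.linalg.norm(x)
            return A - lam * np.outer(x, x)

        A = np.diag([2.0, 5.0, 9.0]) + 0.3      # small symmetric test matrix
        lam1, x1 = power_iteration(A)
        lam2, _ = power_iteration(wielandt_deflate(A, lam1, x1))
        print(lam1, lam2)                        # two largest values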

  5. Relative motion using analytical differential gravity

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1988-01-01

    This paper presents a new approach to the computation of the motion of one satellite relative to another. The trajectory of the reference satellite is computed accurately subject to geopotential perturbations. This precise trajectory is used as a reference in computing the position of a nearby body, or bodies. The problem that arises in this approach is differencing nearly equal terms in the geopotential model, especially as the separation of the reference and nearby bodies approaches zero. By developing closed form expressions for differences in higher order and degree geopotential terms, the numerical problem inherent in the differencing approach is eliminated.

  6. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

    In recent years, the rapid progress of urban computing has raised big issues that create both opportunities and challenges. The heterogeneity and sheer volume of the data, and the large gap between the physical and virtual worlds, make it difficult to solve practical problems in urban computing quickly. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203

  7. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  8. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed, computational, data storage, networking, instruments, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing node into distributed environment, remote access to high data rate instruments, and exploratory grid environment.

  9. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  10. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
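
    Both safeguards are easy to state in code. The sketch below chops values beneath a working-precision tolerance to zero and applies one Newton-Schulz correction, X <- X(2I - AX), a standard device for improving an approximate inverse; the report's simplex-specific bookkeeping is omitted and the matrices are illustrative.

        import numpy as np

        TOL = 1e-13        # cf. the report's 0.1 x 10^-12 for ~18 digits

        def chop(M, tol=TOL):
            """Round to zero every entry smaller than the tolerance."""
            M = M.copy()
            M[np.abs(M) < tol] = 0.0
            return M

        def reinvert(A, X):
            """One Newton-Schulz correction of an approximate inverse X."""
            return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

        A = np.random.default_rng(1).random((5, 5)) + 5.0 * np.eye(5)
        X = np.linalg.inv(A) + 1e-6          # deliberately perturbed inverse
        print(np.linalg.norm(np.eye(5) - A @ X),
              np.linalg.norm(np.eye(5) - A @ reinvert(A, chop(X))))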

  11. Special data base of Informational - Computational System 'INM RAS - Black Sea' for solving inverse and data assimilation problems

    NASA Astrophysics Data System (ADS)

    Zakharova, Natalia; Piskovatsky, Nicolay; Gusev, Anatoly

    2014-05-01

    Development of Informational-Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve such problems one needs to apply modern results from different disciplines and recent developments in: mathematical modeling; the theory of adjoint equations and optimal control; inverse problems; numerical methods theory; numerical algebra and scientific computing. The above problems are studied in the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers. In this work the results on the Special data base development for the ICS "INM RAS - Black Sea" are presented. The input information for the ICS is discussed and some special data processing procedures are described. Results of forecasts using the ICS "INM RAS - Black Sea" with operational observation data assimilation are also presented. This study was supported by the Russian Foundation for Basic Research (project No 13-01-00753) and by the Presidium Program of the Russian Academy of Sciences (project P-23 "Black sea as an imitational ocean model").

    References:
    1. V.I. Agoshkov, M.V. Assovskii, S.A. Lebedev, Numerical simulation of Black Sea hydrothermodynamics taking into account tide-forming forces. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 5-31.
    2. E.I. Parmuzin, V.I. Agoshkov, Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 69-94.
    3. V.B. Zalesny, N.A. Diansky, V.V. Fomin, S.N. Moshonkin, S.G. Demyshev, Numerical model of the circulation of Black Sea and Sea of Azov. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 95-111.
    4. Agoshkov V.I., Assovsky M.B., Giniatulin S.V., Zakharova N.B., Kuimov G.V., Parmuzin E.I., Fomin V.V. Informational Computational system of variational assimilation of observation data "INM RAS - Black sea". Ecological safety of coastal and shelf zones and complex use of shelf resources: Collection of scientific works. Issue 26, Volume 2. National Academy of Sciences of Ukraine, Marine Hydrophysical Institute, Sebastopol, 2012, pp. 352-360. (In Russian)

  12. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details are presented of an integrated general-purpose finite element structural analysis computer program which is also capable of solving complex multidisciplinary problems. The SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis, yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines pertaining to a number of terminals are also available.

  13. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.

  14. When Children Are Concerned.

    ERIC Educational Resources Information Center

    Children & Animals, 1987

    1987-01-01

    Presents a set of teaching activities which deal with the pet overpopulation problem while improving students' persuasive writing skills. Includes activities that focus on how people can solve problems by working together. The activities range from songs to a computer simulation. (TW)

  15. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The region of the boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution of the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one-impulse. For brevity degenerate and singular solutions are not discussed in detail, but should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  16. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  17. Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.

    PubMed

    Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra

    2007-12-01

    The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, outline typical problems with CDSs, and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items) this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.

  18. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.

  19. Helping Students with Emotional and Behavioral Disorders Solve Mathematics Word Problems

    ERIC Educational Resources Information Center

    Alter, Peter

    2012-01-01

    The author presents a strategy for helping students with emotional and behavioral disorders become more proficient at solving math word problems. Math word problems require students to go beyond simple computation in mathematics (e.g., adding, subtracting, multiplying, and dividing) and use higher level reasoning that includes recognizing relevant…

  20. Cognitive Load for Configuration Comprehension in Computer-Supported Geometry Problem Solving: An Eye Movement Perspective

    ERIC Educational Resources Information Center

    Lin, John Jr-Hung; Lin, Sunny S. J.

    2014-01-01

    The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…

  1. Data management for Computer-Aided Engineering (CAE)

    NASA Technical Reports Server (NTRS)

    Bryant, W. A.; Smith, M. R.

    1984-01-01

    Analysis of data flow through the design and manufacturing processes has established specific information management requirements and identified unique problems. The application of data management technology to the engineering/manufacturing environment addresses these problems. An overview of the IPAD prototype data base management system, representing a partial solution to these problems, is presented here.

  2. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
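
    The heuristic is easy to prototype with networkx: repeatedly delete the highest-betweenness node and recurse on the resulting components until every piece is small enough. This sketch conveys the decomposition idea only and is not the authors' exact algorithm; max_size is an illustrative knob.

        import networkx as nx

        def decompose_by_betweenness(G, max_size):
            """Split G by removing high-betweenness nodes until every
            connected piece has at most max_size nodes."""
            if G.number_of_nodes() <= max_size:
                return [G]
            bc = nx.betweenness_centrality(G)
            cut = max(bc, key=bc.get)            # most "central" node
            H = G.copy(); H.remove_node(cut)
            parts = []
            for comp in nx.connected_components(H):
                parts.extend(decompose_by_betweenness(G.subgraph(comp).copy(),
                                                      max_size))
            return parts

        parts = decompose_by_betweenness(nx.karate_club_graph(), max_size=10)
        print([p.number_of_nodes() for p in parts])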

  3. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new approach to computing this quantity. This paper reviews advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process; it surveys prospects for applying inverse heat-source methods in the fatigue damage field, laying the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  4. Fundamental organometallic reactions: Applications on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Rappe, A. K.

    1984-01-01

    Two of the most challenging problems of Organometallic chemistry (loosely defined) are pollution control with the large space velocities needed and nitrogen fixation, a process so capably done by nature and so relatively poorly done by man (industry). For a computational chemist these problems are on the fringe of what is possible with conventional computers (large models needed and accurate energetics required). A summary of the algorithmic modification needed to address these problems on a vector processor such as the CYBER 205 and a sketch of findings to date on deNOx catalysis and nitrogen fixation are presented.

  5. Programs for Fundamentals of Chemistry.

    ERIC Educational Resources Information Center

    Gallardo, Julio; Delgado, Steven

    This document provides computer programs, written in BASIC PLUS, for presenting fundamental or remedial college chemistry students with chemical problems in a computer assisted instructional program. Programs include instructions, a sample run, and 14 separate practice sessions covering: mathematical operations, using decimals, solving…

  6. Questions and Issues in Basic Writing and Computing (Computers and Controversy).

    ERIC Educational Resources Information Center

    Gay, Pamela

    1991-01-01

    Presents findings from 18 reviewed studies with regard to attitude and the quality of writing performance. Discusses pedagogy and the problem of defining basic writers. Suggests research directions that can help move educators toward a new pedagogy. (MG)

  7. Computational Physics.

    ERIC Educational Resources Information Center

    Borcherds, P. H.

    1986-01-01

    Describes an optional course in "computational physics" offered at the University of Birmingham. Includes an introduction to numerical methods and presents exercises involving fast-Fourier transforms, non-linear least-squares, Monte Carlo methods, and the three-body problem. Recommends adding laboratory work into the course in the…

  8. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
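
    A toy unconstrained instance shows the prediction-correction pattern using Hessian-vector products only (no inverse), in the spirit of the first-order prediction steps described above; problem data, step sizes, and iteration counts are all illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 5))
        H = A.T @ A                                 # Hessian of 0.5||Ax-b||^2
        b = lambda t: np.sin(t * np.arange(20.0))   # time-varying data
        dt, x = 0.1, np.zeros(5)
        alpha = 1.0 / np.linalg.norm(H, 2)          # safe gradient step size

        for k in range(100):
            t = k * dt
            # prediction: a few gradient steps on the local model
            # q(d) = 0.5 d'Hd + dt d'c, with c = d(grad f)/dt by differencing
            c = -A.T @ ((b(t + dt) - b(t)) / dt)
            d = np.zeros(5)
            for _ in range(5):
                d -= alpha * (H @ d + dt * c)
            x = x + d
            # correction: gradient steps on f(., t + dt)
            for _ in range(5):
                x -= alpha * (A.T @ (A @ x - b(t + dt)))

        x_star = np.linalg.lstsq(A, b(10.0), rcond=None)[0]
        print(np.linalg.norm(x - x_star))           # small tracking error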

  9. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.

  10. Dynamically Reconfigurable Approach to Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalie M.; Lewis, Robert Michael

    2003-01-01

    The complexity and autonomy of the constituent disciplines and the diversity of the disciplinary data formats make the task of integrating simulations into a multidisciplinary design optimization problem extremely time-consuming and difficult. We propose a dynamically reconfigurable approach to MDO problem formulation wherein an appropriate implementation of the disciplinary information results in basic computational components that can be combined into different MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. We believe that this structure can and should be used to formulate and solve optimization problems in the multidisciplinary context. The present work identifies the basic computational components in several MDO problem formulations and examines the dynamically reconfigurable approach in the context of a popular class of optimization methods. We show that if the disciplinary sensitivity information is implemented in a modular fashion, the transfer of sensitivity information among the formulations under study is straightforward. This enables not only experimentation with a variety of problem formulations in a research environment, but also the flexible use of formulations in a production design environment.

  11. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
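
    The scalarization at the heart of the method can be sketched with scipy: minimize one objective subject to an upper bound on the other, and bisect the bound to spread solutions along the Pareto front. The objectives below are stand-ins, not an aircraft trajectory model, and the adaptive logic of the actual method is richer than this fixed bisection queue.

        import numpy as np
        from scipy.optimize import minimize

        # Stand-in objectives with a genuine trade-off
        f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
        f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

        def solve_eps(eps, x0=(0.5, 0.5)):
            """Minimize f1 subject to f2(x) <= eps (epsilon-constraint)."""
            con = {"type": "ineq", "fun": lambda x: eps - f2(x)}
            return minimize(f1, x0, method="SLSQP", constraints=[con]).x

        lo = f2(minimize(f2, [0.0, 0.0]).x)     # best attainable f2
        hi = f2(minimize(f1, [0.0, 0.0]).x)     # f2 at the f1 optimum
        front, queue = [], [(lo, hi)]
        for _ in range(7):                      # bisect [lo, hi] adaptively
            a, b = queue.pop(0)
            eps = 0.5 * (a + b)
            x = solve_eps(eps)
            front.append((round(f1(x), 4), round(f2(x), 4)))
            queue += [(a, eps), (eps, b)]
        print(sorted(front))                    # sampled Pareto points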

  12. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  13. Closed-form solutions for a class of optimal quadratic regulator problems with terminal constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Turner, J. D.; Chun, H. M.

    1984-01-01

    Closed-form solutions are derived for coupled Riccati-like matrix differential equations describing the solution of a class of optimal finite time quadratic regulator problems with terminal constraints. Analytical solutions are obtained for the feedback gains and the closed-loop response trajectory. A computational procedure is presented which introduces new variables for efficient computation of the terminal control law. Two examples are given to illustrate the validity and usefulness of the theory.
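
    For the discrete-time analogue, the standard backward Riccati sweep computes the time-varying feedback gains; a soft terminal penalty S_N stands in here for the paper's hard terminal constraints, which its closed-form solutions handle exactly. The double-integrator data are illustrative.

        import numpy as np

        def finite_horizon_lqr(A, B, Q, R, S_N, N):
            """Backward Riccati recursion; returns gains K_0 ... K_{N-1}."""
            S, gains = S_N, []
            for _ in range(N):
                K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
                S = Q + A.T @ S @ (A - B @ K)
                gains.append(K)
            return gains[::-1]

        A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discrete double integrator
        B = np.array([[0.0], [0.1]])
        gains = finite_horizon_lqr(A, B, np.eye(2), np.eye(1),
                                   100.0 * np.eye(2), 50)
        x = np.array([1.0, 0.0])
        for K in gains:
            x = (A - B @ K) @ x                    # closed-loop rollout
        print(x)                                   # driven near the origin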

  14. Gridless particle technique for the Vlasov-Poisson system in problems with high degree of symmetry

    NASA Astrophysics Data System (ADS)

    Boella, E.; Coppa, G.; D'Angola, A.; Peiretti Paradisi, B.

    2018-03-01

    In the paper, gridless particle techniques are presented in order to solve problems involving electrostatic, collisionless plasmas. The method makes use of computational particles having the shape of spherical shells or of rings, and can be used to study cases in which the plasma has spherical or axial symmetry, respectively. As a computational grid is absent, the technique is particularly suitable when the plasma occupies a rapidly changing space region.

  15. SAGUARO: a finite-element computer program for partially saturated porous flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Gartling, D.K.; Larson, D.E.

    1983-06-01

    SAGUARO is a finite element computer program designed to calculate two-dimensional flow of mass and energy through porous media. The media may be saturated or partially saturated. SAGUARO solves the parabolic time-dependent mass transport equation which accounts for the presence of partially saturated zones through the use of highly non-linear material characteristic curves. The energy equation accounts for the possibility of partially saturated regions by adjusting the thermal capacitances and thermal conductivities according to the volume fraction of water present in the local pores. Program capabilities, user instructions and a sample problem are presented in this manual.

  16. Approximations of thermoelastic and viscoelastic control systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Liu, Z. Y.; Miller, R. E.

    1990-01-01

    Well-posed models and computational algorithms are developed and analyzed for control of a class of partial differential equations that describe the motions of thermo-viscoelastic structures. An abstract (state space) framework and a general well-posedness result are presented that can be applied to a large class of thermo-elastic and thermo-viscoelastic models. This state space framework is used in the development of a computational scheme to be used in the solution of a linear quadratic regulator (LQR) control problem. A detailed convergence proof is provided for the viscoelastic model and several numerical results are presented to illustrate the theory and to analyze problems for which the theory is incomplete.
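
    Once such a model is approximated by a finite-dimensional system x' = Ax + Bu, the LQR step itself is routine; the sketch below (a generic finite-dimensional illustration with made-up matrices, not the paper's approximation scheme) computes the optimal gain with SciPy.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-2.0, -0.1]])   # toy damped-oscillator dynamics
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)                              # state weighting
        R = np.array([[1.0]])                      # control weighting

        # solve A'P + PA - P B R^{-1} B' P + Q = 0, then apply u = -Kx
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)
        print("LQR gain:", K)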

  17. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.

  18. An eLearning Standard Approach for Supporting PBL in Computer Engineering

    ERIC Educational Resources Information Center

    Garcia-Robles, R.; Diaz-del-Rio, F.; Vicente-Diaz, S.; Linares-Barranco, A.

    2009-01-01

    Problem-based learning (PBL) has proved to be a highly successful pedagogical model in many fields, although it is not that common in computer engineering. PBL goes beyond the typical teaching methodology by promoting student interaction. This paper presents a PBL trial applied to a course in a computer engineering degree at the University of…

  19. On the minimum orbital intersection distance computation: a new effective method

    NASA Astrophysics Data System (ADS)

    Hedo, José M.; Ruíz, Manuel; Peláez, Jesús

    2018-06-01

    The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.
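
    For contrast with fast iterative methods like the one proposed, a brute-force MOID estimate is easy to write down: sample both Keplerian ellipses densely and take the minimum pairwise distance. The sketch below does exactly that (our illustration; the Eros-like orbital elements are approximate).

        import numpy as np

        def ellipse_points(a, e, i=0.0, raan=0.0, argp=0.0, n=1000):
            """Points on a Keplerian ellipse in the ecliptic frame (angles in radians)."""
            nu = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            r = a * (1.0 - e**2) / (1.0 + e * np.cos(nu))
            x, y = r * np.cos(nu), r * np.sin(nu)          # perifocal coordinates
            cO, sO = np.cos(raan), np.sin(raan)
            ci, si = np.cos(i), np.sin(i)
            cw, sw = np.cos(argp), np.sin(argp)
            R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci],
                          [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci],
                          [sw*si,             cw*si]])       # perifocal -> ecliptic
            return (R @ np.vstack((x, y))).T

        earth = ellipse_points(1.000, 0.017)
        eros = ellipse_points(1.458, 0.223, np.radians(10.8),
                              np.radians(304.3), np.radians(178.8))
        d = np.linalg.norm(earth[:, None, :] - eros[None, :, :], axis=2)
        print("grid-search MOID estimate (au):", d.min())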

  20. Tomography in Geology: 3D Modeling and Analysis of Structural Features of Rocks Using Computed MicroTomography

    NASA Astrophysics Data System (ADS)

    Ponomarev, A. A.; Mamadaliev, R. A.; Semenova, T. V.

    2016-10-01

    The article presents a brief overview of the current state of computed tomography in the sphere of oil and gas production in Russia and worldwide. The operation of the Skyscan 1172 computed microtomograph is also described, together with the authors' own examples of its application to solving geological problems.

  1. Computer programs for calculating two-dimensional potential flow through deflected nozzles

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Stockman, N. O.

    1979-01-01

    Computer programs to calculate the incompressible potential flow, corrected for compressibility, in two-dimensional nozzles at arbitrary operating conditions are presented. A statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program are included.

  2. A Visual Tool for Computer Supported Learning: The Robot Motion Planning Example

    ERIC Educational Resources Information Center

    Elnagar, Ashraf; Lulu, Leena

    2007-01-01

    We introduce an effective computer aided learning visual tool (CALVT) to teach graph-based applications. We present the robot motion planning problem as an example of such applications. The proposed tool can be used to simulate and/or further to implement practical systems in different areas of computer science such as graphics, computational…

  3. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  4. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    ERIC Educational Resources Information Center

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  5. Computing relative plate velocities: a primer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevis, M.

    1987-08-01

    Standard models of present-day plate motions are framed in terms of rates and poles of rotation, in accordance with the well-known theorem due to Euler. This article shows how computation of relative plate velocities from such models can be viewed as a simple problem in spherical trigonometry. A FORTRAN subroutine is provided to perform the necessary computations.
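
    The computation the article walks through reduces to one cross product: the velocity of a point on one plate relative to another is v = omega x r, with omega the Euler rotation vector. A Python counterpart of the article's FORTRAN subroutine might look as follows (the pole and rate below are illustrative, not taken from a published model).

        import numpy as np

        def plate_velocity(pole_lat, pole_lon, rate_deg_per_myr, lat, lon, R=6371.0):
            """Relative velocity (km/Myr, numerically equal to mm/yr) at (lat, lon)."""
            def unit(lat_d, lon_d):
                la, lo = np.radians(lat_d), np.radians(lon_d)
                return np.array([np.cos(la) * np.cos(lo),
                                 np.cos(la) * np.sin(lo),
                                 np.sin(la)])
            omega = np.radians(rate_deg_per_myr) * unit(pole_lat, pole_lon)  # rad/Myr
            r = R * unit(lat, lon)                                           # km
            return np.cross(omega, r)

        v = plate_velocity(pole_lat=50.0, pole_lon=-74.0, rate_deg_per_myr=0.78,
                           lat=-20.0, lon=-70.0)
        print("speed (mm/yr):", np.linalg.norm(v))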

  6. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (e.g., models of the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  7. On some methods for improving time of reachability sets computation for the dynamic system control problem

    NASA Astrophysics Data System (ADS)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the time needed to compute reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory during single-threaded calculation. The second approach is based on parallel algorithms built on the data structures of the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability-set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how the computing time depends on the data structure used.

  8. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  9. Research in applied mathematics, numerical analysis, and computer science

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  10. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry

    1989-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
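
    The two-phase scheme described here is commonly called the inspector/executor pattern. A serial mock-up of what the preprocessor computes, with hypothetical names and a simple block partition:

        import numpy as np

        nproc, n = 4, 16
        rng = np.random.default_rng(2)
        edges = rng.integers(0, n, size=(n, 2))      # irregular references: y[i] uses x[edges[i]]
        owner = lambda i: int(i) * nproc // n        # block partition of indices

        # inspector: precompute, per processor, the off-processor indices it will need
        needs = [set() for _ in range(nproc)]
        for i, (a, b) in enumerate(edges):
            p = owner(i)
            for j in (a, b):
                if owner(j) != p:
                    needs[p].add(int(j))
        schedule = [sorted(s) for s in needs]        # reusable communication schedule

        # executor: each "time step" gathers exactly the scheduled indices once
        x = rng.random(n)
        for p in range(nproc):
            ghost = x[schedule[p]]                   # one gather per processor
            # ... loop iterations owned by p would use local x plus ghost ...
        print("ghost counts per processor:", [len(s) for s in schedule])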

  11. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry

    1990-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.

  12. A new impedance accounting for short- and long-range effects in mixed substructured formulations of nonlinear problems

    NASA Astrophysics Data System (ADS)

    Negrello, Camille; Gosselet, Pierre; Rey, Christian

    2018-05-01

    An efficient method for solving large nonlinear problems combines Newton solvers and Domain Decomposition Methods (DDM). In the DDM framework, the boundary conditions can be chosen to be primal, dual or mixed. The mixed approach has the advantage of admitting a search for an optimal interface parameter (often called the impedance) which can increase the convergence rate. The optimal value of this parameter is usually too expensive to compute exactly in practice: an approximate version has to be sought, together with a compromise between efficiency and computational cost. In the context of parallel algorithms for solving nonlinear structural mechanics problems, we propose a new heuristic for the impedance which combines short- and long-range effects at a low computational cost.

  13. 2-dimensional implicit hydrodynamics on adaptive grids

    NASA Astrophysics Data System (ADS)

    Stökl, A.; Dorfi, E. A.

    2007-12-01

    We present a numerical scheme for two-dimensional hydrodynamics computations using a 2D adaptive grid together with an implicit discretization. The combination of these techniques has shown favorable numerical properties in a variety of one-dimensional astrophysical problems, which motivated us to generalize this approach to two-dimensional applications. Due to the different topological nature of 2D grids compared to 1D problems, grid adaptivity has to avoid severe grid distortions, which necessitates additional smoothing parameters in the formulation of a 2D adaptive grid. The concept of adaptivity is described in detail, and several test computations demonstrate the effectiveness of smoothing. The coupled solution of this grid equation together with the equations of hydrodynamics is illustrated by the computation of a 2D shock tube problem.

  14. A computational analysis of lower bounds for the economic lot sizing problem in remanufacturing with separate setups

    NASA Astrophysics Data System (ADS)

    Aishah Syed Ali, Sharifah

    2017-09-01

    This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, straightforward formulations lead to computationally inefficient, low-quality solutions. We therefore present (a) a multicommodity formulation and (b) a strengthened formulation based on the a priori addition of valid inequalities in the space of original variables, both of which are compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.

  15. A multilevel finite element method for Fredholm integral eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Xie, Hehu; Zhou, Tao

    2015-12-01

    In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted to performing a series of integral iterations together with eigenvalue solves on the coarsest mesh. Any existing efficient integration scheme can then be used for the associated integration process. Error estimates are provided, and the computational complexity is analyzed. Notably, the total computational work of our method is comparable with a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
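
    The discrete heart of the application is small: on a single mesh, a Nyström-style discretization of the covariance operator already yields a truncated Karhunen-Loève expansion. The sketch below shows that single-mesh baseline (not the paper's multilevel scheme) for an exponential covariance on [0, 1].

        import numpy as np

        n = 200
        x = np.linspace(0.0, 1.0, n)
        w = np.full(n, 1.0 / n)                              # quadrature weights
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # covariance kernel

        # symmetrized weighted eigenproblem for the integral operator
        B = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]
        lam, V = np.linalg.eigh(B)
        lam, V = lam[::-1], V[:, ::-1]                       # descending order
        phi = V / np.sqrt(w)[:, None]                        # KL eigenfunctions on the mesh

        print("leading KL eigenvalues:", lam[:5])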

  16. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  17. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight times, estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable as posed, involving millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
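
    The traversal-time estimate in the model above is simple to reproduce in miniature: fit a kernel density estimate to historical traversal times and take its mode. A sketch with synthetic data (all numbers invented):

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)
        times = np.concatenate([rng.normal(42.0, 3.0, 400),    # typical flights (minutes)
                                rng.normal(55.0, 5.0, 100)])   # delayed flights
        kde = gaussian_kde(times)
        grid = np.linspace(times.min(), times.max(), 1000)
        mode = grid[np.argmax(kde(grid))]
        print("link traversal time (mode, minutes):", round(float(mode), 1))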

  18. Computer Tutorial "Higher Mathematics" for Engineering Specialties.

    ERIC Educational Resources Information Center

    Slivina, Natalia A.; Krivosheev, Anatoly O.; Fomin, Sergey S.

    This paper presents a CD-ROM computer tutorial titled "Higher Mathematics," that contains 17 educational mathematical programs and is intended for use in Russian university engineering education. The first section introduces the courseware climate in Russia and outlines problems with commercially available universal mathematical…

  19. A System for Generating Instructional Computer Graphics.

    ERIC Educational Resources Information Center

    Nygard, Kendall E.; Ranganathan, Babusankar

    1983-01-01

    Description of the Tektronix-Based Interactive Graphics System for Instruction (TIGSI), which was developed for generating graphics displays in computer-assisted instruction materials, discusses several applications (e.g., reinforcing learning of concepts, principles, rules, and problem-solving techniques) and presents advantages of the TIGSI…

  20. Children in an Information Age.

    ERIC Educational Resources Information Center

    Sendov, Blagovest

    1988-01-01

    Summarizes the main themes and presents recommendations of the international conference, "Children in an Information Age: Tomorrow's Problems Today," that was held in Bulgaria in 1985. Topics discussed include computer training for children; the need for well designed research; the teacher-computer relationship; artificial intelligence;…

  1. Statistical methodologies for the control of dynamic remapping

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Nicol, D. M.

    1986-01-01

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping for two qualitatively different kinds of problems. One assumes that performance deteriorates gradually; the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.

  2. Bohnenblust-Hille inequalities: analytical and computational aspects.

    PubMed

    Cavalcante, Wasthenny V; Pellegrino, Daniel M

    2018-02-01

    The Bohnenblust-Hille polynomial and multilinear inequalities were proved in 1931 and the determination of exact values of their constants is still an open and challenging problem, pursued by various authors. The present paper briefly surveys recent attempts to attack/solve this problem; it also presents new results, like connections with classical results of the linear theory of absolutely summing operators, and new perspectives.

  3. A combinatorial approach to the design of vaccines.

    PubMed

    Martínez, Luis; Milanič, Martin; Legarreta, Leire; Medvedev, Paul; Malaina, Iker; de la Fuente, Ildefonso M

    2015-05-01

    We present two new problems of combinatorial optimization and discuss their applications to the computational design of vaccines. In the shortest λ-superstring problem, given a family S(1),...,S(k) of strings over a finite alphabet, a set T of "target" strings over that alphabet, and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ target strings that are substrings of S(i). In the shortest λ-cover superstring problem, given a collection X(1),...,X(n) of finite sets of strings over a finite alphabet and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ elements of X(i) as substrings. The two problems are polynomially equivalent, and the shortest λ-cover superstring problem is a common generalization of two well-known combinatorial optimization problems, the shortest common superstring problem and the set cover problem. We present two approaches to obtaining exact or approximate solutions to the shortest λ-superstring and λ-cover superstring problems: one based on integer programming, and a hill-climbing algorithm. An application is given to the computational design of vaccines, and the algorithms are applied to experimental data taken from patients infected by H5N1 and HIV-1.
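
    To make the second problem concrete, here is a toy greedy heuristic for the shortest λ-cover superstring problem (our illustration only; the paper's approaches are integer programming and hill climbing): repeatedly append, with maximal overlap, the candidate string that satisfies the most still-unmet requirements.

        def overlap(s, t):
            """Length of the longest suffix of s that is a prefix of t."""
            for k in range(min(len(s), len(t)), 0, -1):
                if s.endswith(t[:k]):
                    return k
            return 0

        def greedy_lambda_cover(sets, lam):
            need = [lam] * len(sets)                 # unmet requirement count per X(i)
            result = ""
            candidates = sorted({s for X in sets for s in X})
            while any(n > 0 for n in need):
                gain = lambda c: sum(1 for X, n in zip(sets, need)
                                     if n > 0 and c in X and c not in result)
                best = max(candidates, key=lambda c: (gain(c), overlap(result, c)))
                if gain(best) == 0:
                    break                            # nothing helps; toy stopping rule
                result += best[overlap(result, best):]
                need = [max(0, lam - sum(1 for s in X if s in result)) for X in sets]
            return result

        X = [{"abc", "bcd"}, {"cde", "abc"}, {"def"}]
        print(greedy_lambda_cover(X, lam=1))         # e.g. "abcdef"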

  4. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application are discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results are presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
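
    Dynamic, distributed balancing is beyond a short example, but the core block-assignment step can be caricatured with a longest-processing-time greedy over processors of different speeds (all costs and speeds below are invented):

        import heapq

        blocks = [8.0, 6.5, 6.0, 4.0, 3.5, 2.0, 1.5, 1.0]   # per-block work estimates
        speeds = [1.0, 1.0, 0.5]                             # relative processor speeds

        heap = [(0.0, p) for p in range(len(speeds))]        # (finish time, processor)
        heapq.heapify(heap)
        assign = [[] for _ in speeds]
        for cost in sorted(blocks, reverse=True):            # largest blocks first
            t, p = heapq.heappop(heap)                       # least-loaded processor
            assign[p].append(cost)
            heapq.heappush(heap, (t + cost / speeds[p], p))

        print(assign)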

  5. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for safety assessment computations for RBMK-type reactors. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); these computations were performed in the geometry and with the neutron constants provided by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. The kinetics computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole, with imitation of the movement of control and protection system (CPS) controls in a core.

  6. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  7. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.

  8. Algebraic model checking for Boolean gene regulatory networks.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner basis (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of the intermediate polynomials during the calculation of Groebner bases using our method never grows, resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
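
    The Groebner-basis machinery itself is beyond a short sketch, but the question it answers scalably is easy to state in code: which states of a Boolean network are fixed points (attractors of length one)? A brute-force finder for a made-up three-gene network:

        from itertools import product

        # made-up network: x1' = x2 AND x3, x2' = NOT x1, x3' = x1 OR x3
        def step(x):
            x1, x2, x3 = x
            return (x2 and x3, not x1, x1 or x3)

        fixed_points = [x for x in product([False, True], repeat=3) if step(x) == x]
        print([[int(b) for b in x] for x in fixed_points])   # -> [[0, 1, 0]]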

  9. Physically-enhanced data visualisation: towards real time solution of Partial Differential Equations in 3D domains

    NASA Astrophysics Data System (ADS)

    Zlotnik, Sergio

    2017-04-01

    Information provided by visualisation environments can be greatly enriched if the data shown are combined with some relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information about the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks, or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a Partial Differential Equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g. Finite Elements, Finite Volumes, Finite Differences). All these traditional techniques, however, require very large computational resources (particularly in 3D), making them useless in a real-time visualisation environment such as VR, because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in about one thousandth of a second, making the solver ideal to be embedded into VR environments. Based on Model Order Reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, which is run only once, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems and some simple coupled problems.
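
    The offline/online split can be illustrated with a bare-bones proper orthogonal decomposition (POD) reduced-order model; this is a generic stand-in for the Model Order Reduction machinery, not the author's method, and every matrix here is synthetic.

        import numpy as np

        n, n_snap, r = 400, 50, 8
        rng = np.random.default_rng(0)
        # parameterized SPD system (A0 + mu * A1) u = f, with parameter mu
        A0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        A1 = np.diag(np.linspace(0.5, 1.5, n))
        f = np.ones(n)

        # offline (expensive, run once): snapshots and a POD basis from the SVD
        snaps = np.column_stack([np.linalg.solve(A0 + mu * A1, f)
                                 for mu in rng.uniform(0.1, 2.0, n_snap)])
        V = np.linalg.svd(snaps, full_matrices=False)[0][:, :r]
        A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f   # precomputed projections

        # online (cheap, per query): solve an r x r system instead of n x n
        def solve_reduced(mu):
            return V @ np.linalg.solve(A0r + mu * A1r, fr)

        u_full = np.linalg.solve(A0 + 1.0 * A1, f)
        err = np.linalg.norm(solve_reduced(1.0) - u_full) / np.linalg.norm(u_full)
        print("relative error of the reduced solution:", err)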

  10. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation. PMID:26512650
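
    For comparison, a classical (non-DNA) baseline for the same unbalanced assignment problem: replicate each individual enough times to absorb all jobs, then run the Hungarian method on the padded cost matrix. This assumes the variant in which an individual may take several jobs; all costs below are invented.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(1)
        m, n = 3, 7                                  # m individuals, n jobs (m < n)
        cost = rng.integers(1, 10, size=(m, n)).astype(float)

        reps = -(-n // m)                            # ceil(n / m) copies per individual
        padded = np.repeat(cost, reps, axis=0)       # rows k*reps .. k*reps+reps-1: individual k
        rows, cols = linear_sum_assignment(padded)   # rectangular matrices are supported

        assignment = {int(c): int(r) // reps for r, c in zip(rows, cols)}  # job -> individual
        print("total cost:", padded[rows, cols].sum())
        print("job -> individual:", dict(sorted(assignment.items())))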

  11. Boundary elements; Proceedings of the Fifth International Conference, Hiroshima, Japan, November 8-11, 1983

    NASA Astrophysics Data System (ADS)

    Brebbia, C. A.; Futagami, T.; Tanaka, M.

    The boundary-element method (BEM) in computational fluid and solid mechanics is examined in reviews and reports of theoretical studies and practical applications. Topics presented include the fundamental mathematical principles of BEMs, potential problems, EM-field problems, heat transfer, potential-wave problems, fluid flow, elasticity problems, fracture mechanics, plates and shells, inelastic problems, geomechanics, dynamics, industrial applications of BEMs, optimization methods based on the BEM, numerical techniques, and coupling.

  12. An evaluation of three methods of saying "no" to avoid an escalating response class hierarchy.

    PubMed

    Mace, F Charles; Pratt, Jamie L; Prager, Kevin L; Pritchard, Duncan

    2011-01-01

    We evaluated the effects of three different methods of denying access to requested high-preference activities on escalating problem behavior. Functional analysis and response class hierarchy (RCH) assessment results indicated that 4 topographies of problem behaviors displayed by a 13-year-old boy with high-functioning autism constituted an RCH maintained by positive (tangible) reinforcement. Identification of the RCH comprised the baseline phase, during which computer access was denied by saying "no" and providing an explanation for the restriction. Two alternative methods of saying "no" were then evaluated. These methods included (a) denying computer access while providing an opportunity to engage in an alternative preferred activity and (b) denying immediate computer access by arranging a contingency between completion of a low-preference task and subsequent computer access. Results indicated that a hierarchy of problem behavior may be identified in the context of denying access to a preferred activity and that it may be possible to prevent occurrences of escalating problem behavior by either presenting alternative options or arranging contingencies when saying "no" to a child's requests.

  13. Neural Information Processing in Cognition: We Start to Understand the Orchestra, but Where is the Conductor?

    PubMed Central

    Palm, Günther

    2016-01-01

    Research in neural information processing has been successful in the past, providing useful approaches both to practical problems in computer science and to computational models in neuroscience. Recent developments in the area of cognitive neuroscience present new challenges for a computational or theoretical understanding asking for neural information processing models that fulfill criteria or constraints from cognitive psychology, neuroscience and computational efficiency. The most important of these criteria for the evaluation of present and future contributions to this new emerging field are listed at the end of this article. PMID:26858632

  14. Navier-Stokes computations useful in aircraft design

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1990-01-01

    Large scale Navier-Stokes computations about aircraft components as well as reasonably complete aircraft configurations are presented and discussed. Speed and memory requirements are described for various general problem classes, which in some cases are already being used in the industrial design environment. Recent computed results, with experimental comparisons when available, are included to highlight the presentation. Finally, prospects for the future are described and recommendations for areas of concentrated research are indicated. The future of Navier-Stokes computations is seen to be rapidly expanding across a broad front of applications, which includes the entire subsonic-to-hypersonic speed regime.

  15. A Description of the Strategic Knowledge of Experts Solving Transmission Genetics Problems.

    ERIC Educational Resources Information Center

    Collins, Angelo

    Descriptions of the problem-solving strategies of experts solving realistic, computer-generated transmission genetics problems are presented in this paper and implications for instruction are discussed. Seven experts were involved in the study. All of the experts had a doctoral degree and experience in both teaching and doing research in genetics.…

  16. Organizational/Memory Tools: A Technique for Improving Problem Solving Skills.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    1986-01-01

    This study was conducted to determine whether students would use a computer-presented organizational/memory tool as an aid in problem solving, and whether and how locus of control would affect tool use and problem-solving performance. Learners did use the tools, which were most effective in the learner control with feedback condition. (MBR)

  17. The interaction between a solid body and viscous fluid by marker-and-cell method

    NASA Technical Reports Server (NTRS)

    Cheng, R. Y. K.

    1976-01-01

    A computational method for solving nonlinear problems relating to the impact and penetration of a rigid body into a fluid-type medium is presented. The numerical technique, based on the Marker-and-Cell method, gives the pressure and velocity of the flow field. An important feature of this method is that the force and displacement of the rigid body interacting with the fluid during the impact and sinking phases are evaluated from the boundary stresses imposed by the fluid on the rigid body. A sample problem of low velocity penetration of a rigid block into still water is solved by this method. The computed time histories of the acceleration, pressure, and displacement of the block show good agreement with experimental measurements. A sample problem of high velocity impact of a rigid block into soft clay is also presented.

  18. Verification of floating-point software

    NASA Technical Reports Server (NTRS)

    Hoover, Doug N.

    1990-01-01

    Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations were perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
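
    The bisection example is worth seeing with floating point in mind; the sketch below (our illustration, not ORA's verified code) terminates exactly when the interval can no longer be split in floating point, rather than testing against a fixed epsilon.

        def bisect(f, a, b):
            """Find x in [a, b] with f(x) ~ 0, assuming f(a) and f(b) differ in sign."""
            fa = f(a)
            while True:
                m = a + (b - a) / 2.0        # midpoint form that avoids overflow in a + b
                if m == a or m == b:         # no representable point strictly between
                    return m
                if (f(m) < 0.0) == (fa < 0.0):
                    a, fa = m, f(m)          # root lies in [m, b]
                else:
                    b = m                    # root lies in [a, m]

        print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))   # ~ 1.4142135623730951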

  19. Hyperbolic heat conduction problems involving non-Fourier effects - Numerical simulations via explicit Lax-Wendroff/Taylor-Galerkin finite element formulations

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Namburu, Raju R.

    1989-01-01

    Numerical simulations are presented for hyperbolic heat-conduction problems that involve non-Fourier effects, using explicit, Lax-Wendroff/Taylor-Galerkin FEM formulations as the principal computational tool. Also employed are smoothing techniques which stabilize the numerical noise and accurately predict the propagating thermal disturbances. The accurate capture of propagating thermal disturbances at characteristic time-step values is achieved; numerical test cases are presented which validate the proposed hyperbolic heat-conduction problem concepts.
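
    A minimal 1D analogue conveys the setting: the Cattaneo (non-Fourier) model is a hyperbolic system for temperature and heat flux, and a two-step Lax-Wendroff (Richtmyer) finite-difference scheme, used here as a simplified stand-in for the Lax-Wendroff/Taylor-Galerkin FEM above, propagates thermal waves at finite speed. All material parameters below are invented.

        import numpy as np

        nx, L = 400, 1.0
        k, rho_c, tau = 1.0, 1.0, 0.05     # conductivity, heat capacity, relaxation time
        dx = L / nx
        c = np.sqrt(k / (rho_c * tau))     # finite thermal wave speed
        dt = 0.8 * dx / c                  # CFL-limited time step

        x = np.linspace(0.0, L, nx)
        T = np.where(x < 0.1, 1.0, 0.0)    # hot segment on the left
        q = np.zeros(nx)

        # system: T_t + (q / rho_c)_x = 0,  q_t + (k T / tau)_x = -q / tau
        for _ in range(200):
            fT, fq = q / rho_c, k * T / tau
            # half step at cell interfaces (source term handled explicitly)
            qa = 0.5 * (q[:-1] + q[1:])
            Tm = 0.5 * (T[:-1] + T[1:]) - 0.5 * dt / dx * (fT[1:] - fT[:-1])
            qm = qa - 0.5 * dt / dx * (fq[1:] - fq[:-1]) - 0.5 * dt * qa / tau
            # full step at interior cell centers
            fTm, fqm = qm / rho_c, k * Tm / tau
            T[1:-1] -= dt / dx * (fTm[1:] - fTm[:-1])
            q[1:-1] -= dt / dx * (fqm[1:] - fqm[:-1]) + dt * 0.5 * (qm[1:] + qm[:-1]) / tau

        print("peak temperature after propagation:", T.max())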

  20. Using process groups to implement failure detection in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1991-01-01

    Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.

  1. Computational modeling of the cell-autonomous mammalian circadian oscillator.

    PubMed

    Podkolodnaya, Olga A; Tverdokhleb, Natalya N; Podkolodnyy, Nikolay L

    2017-02-24

    This review summarizes various mathematical models of the cell-autonomous mammalian circadian clock. We present the basics necessary for understanding the cell-autonomous mammalian circadian oscillator, modern experimental data essential for its reconstruction, and some special problems related to the validation of mathematical circadian oscillator models. This work compares existing mathematical models of the circadian oscillator and the results of computational studies of the oscillating systems. Finally, we discuss applications of the mathematical models of the mammalian circadian oscillator for solving specific problems in circadian rhythm biology.

  2. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
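
    A generic member of this method class is short enough to sketch (this is the textbook subgradient projection step, not the authors' self-adapting strategy): for each convex set written as {x : g(x) <= 0}, step along a subgradient whenever the inequality is violated.

        import numpy as np

        # two convex sets in R^2, each written as g(x) <= 0
        g1 = lambda x: x[0] + x[1] - 1.0                        # halfspace
        sg1 = lambda x: np.array([1.0, 1.0])                    # its (sub)gradient
        g2 = lambda x: (x[0] - 3.0)**2 + x[1]**2 - 4.0          # disk about (3, 0)
        sg2 = lambda x: np.array([2.0 * (x[0] - 3.0), 2.0 * x[1]])

        x = np.array([5.0, 5.0])
        lam = 1.0                                               # relaxation parameter in (0, 2)
        for _ in range(200):
            for g, sg in ((g1, sg1), (g2, sg2)):
                v = g(x)
                if v > 0.0:                                     # constraint violated
                    s = sg(x)
                    x = x - lam * (v / (s @ s)) * s             # subgradient projection step
        print("point:", x, " g1:", g1(x), " g2:", g2(x))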

  3. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.

  4. Cubic spline numerical solution of an ablation problem with convective backface cooling

    NASA Astrophysics Data System (ADS)

    Lin, S.; Wang, P.; Kahawita, R.

    1984-08-01

    An implicit numerical technique using cubic splines is presented for solving an ablation problem on a thin wall with convective cooling. A non-uniform computational mesh with 6 grid points has been used for the numerical integration. The method has been found to be computationally efficient, providing, for the case under consideration, an overall error of about 1 percent. The results obtained indicate that the convective cooling is an important factor in reducing the ablation thickness.

  5. Streaming support for data intensive cloud-based sequence analysis.

    PubMed

    Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
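
    The streaming idea, stripped to its essentials: consume the sequence data in chunks as they arrive, so analysis overlaps the transfer instead of waiting for the upload to finish. A toy sketch (not the elastream package; the record counting is deliberately naive):

        import io

        def transfer(stream, chunk_size=1 << 16):
            """Yield chunks as they 'arrive' from the client's site."""
            while chunk := stream.read(chunk_size):
                yield chunk

        def analyze(chunks):
            """Consume chunks incrementally; naively count '@' header characters."""
            count = 0
            for chunk in chunks:
                count += chunk.count(b"@")
            return count

        fake_fastq = io.BytesIO(b"@read1\nACGT\n+\nFFFF\n" * 1000)
        print("records seen while streaming:", analyze(transfer(fake_fastq)))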

  6. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.

    PubMed

    Lau, Hoi-Kwan; Plenio, Martin B

    2016-09-02

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  7. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding

    NASA Astrophysics Data System (ADS)

    Lau, Hoi-Kwan; Plenio, Martin B.

    2016-09-01

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  8. Computations of Drop Collision and Coalescence

    NASA Technical Reports Server (NTRS)

    Tryggvason, Gretar; Juric, Damir; Nas, Selman; Mortazavi, Saeed

    1996-01-01

    Computations of drops collisions, coalescence, and other problems involving drops are presented. The computations are made possible by a finite difference/front tracking technique that allows direct solutions of the Navier-Stokes equations for a multi-fluid system with complex, unsteady internal boundaries. This method has been used to examine the various collision modes for binary collisions of drops of equal size, mixing of two drops of unequal size, behavior of a suspension of drops in linear and parabolic shear flows, and the thermal migration of several drops. The key results from these simulations are reviewed. Extensions of the method to phase change problems and preliminary results for boiling are also shown.

  9. The data base management system alternative for computing in the human services.

    PubMed

    Sircar, S; Schkade, L L; Schoech, D

    1983-01-01

    The traditional incremental approach to computerization presents substantial problems as systems develop and grow. The Data Base Management System approach to computerization was developed to overcome the problems resulting from implementing computer applications one at a time. The authors describe the applications approach and the alternative Data Base Management System (DBMS) approach through their developmental history, discuss the technology of DBMS components, and consider the implications of choosing the DBMS alternative. Human service managers need an understanding of the DBMS alternative and its applicability to their agency data processing needs. The basis for a conscious selection of computing alternatives is outlined.

  10. Algorithms in nature: the convergence of systems biology and computational thinking

    PubMed Central

    Navlakha, Saket; Bar-Joseph, Ziv

    2011-01-01

    Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future. PMID:22068329

  11. An algorithm for a single machine scheduling problem with sequence dependent setup times and scheduling windows

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1975-01-01

    An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.
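
    The flavor of such an enumeration is easy to convey: depth-first search over task sequences, pruning any branch whose next task cannot start inside its scheduling window or finish within the fixed horizon. Everything below (durations, windows, setup rule, values) is invented for illustration.

        tasks = {   # name: (duration, (window start, window end), value)
            "A": (2.0, (0.0, 4.0), 3.0),
            "B": (1.0, (1.0, 6.0), 2.0),
            "C": (2.5, (0.0, 8.0), 4.0),
        }
        setup = lambda a, b: 0.5 if a != b else 0.0   # sequence-dependent setup time
        HORIZON = 8.0

        best = {"value": -1.0, "seq": None}

        def dfs(seq, t, value):
            if value > best["value"]:
                best.update(value=value, seq=list(seq))
            for name, (dur, (ws, we), val) in tasks.items():
                if name in seq:
                    continue
                start = max(t + (setup(seq[-1], name) if seq else 0.0), ws)
                if start <= we and start + dur <= HORIZON:    # window and horizon checks
                    seq.append(name)
                    dfs(seq, start + dur, value + val)
                    seq.pop()

        dfs([], 0.0, 0.0)
        print(best)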

  12. A Qualitative Study of Students' Computational Thinking Skills in a Data-Driven Computing Class

    ERIC Educational Resources Information Center

    Yuen, Timothy T.; Robbins, Kay A.

    2014-01-01

    Critical thinking, problem solving, the use of tools, and the ability to consume and analyze information are important skills for the 21st century workforce. This article presents a qualitative case study that follows five undergraduate biology majors in a computer science course (CS0). This CS0 course teaches programming within a data-driven…

  13. A Drawing and Multi-Representational Computer Environment for Beginners' Learning of Programming Using C: Design and Pilot Formative Evaluation

    ERIC Educational Resources Information Center

    Kordaki, Maria

    2010-01-01

    This paper presents both the design and the pilot formative evaluation study of a computer-based problem-solving environment (named LECGO: Learning Environment for programming using C using Geometrical Objects) for the learning of computer programming using C by beginners. In its design, constructivist and social learning theories were taken into…

  14. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. Whether turbulent shear flows are resolved directly or modeled with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. For this reason, the pseudo-spectral method has so far been the method of choice. However, finite difference and finite element methods are widely applied to flows of practical importance, since they are easily adapted to flows with complex geometric configurations. Several problems arise, though, in applying the finite difference method to direct and large eddy simulations, accuracy being one of the most important. This point was addressed previously by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. To obtain high efficiency, a multigrid Poisson solver is combined with a higher-order accurate finite difference method. The choice of formulation is also one of the most important issues in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have usually been solved in the primitive-variables formulation, whose major difficulty is the rigorous satisfaction of the equation of continuity; in general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, by contrast, the velocity field satisfies the equation of continuity automatically. For this reason the vorticity-vector potential method was extended to a generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, a generalized coordinate system, and a fourth-order accurate difference method as the computational method. We present the computational method and apply it to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.

  15. Unary probabilistic and quantum automata on promise problems

    NASA Astrophysics Data System (ADS)

    Gainutdinova, Aida; Yakaryılmaz, Abuzer

    2018-02-01

    We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error unary QFAs are more powerful than bounded-error unary PFAs, and, contrary to the binary language case, the computational power of Las-Vegas QFAs and bounded-error PFAs is equivalent to the computational power of deterministic finite automata (DFAs). Then, we present a new family of unary promise problems defined with two parameters such that when fixing one parameter QFAs can be exponentially more succinct than PFAs and when fixing the other parameter PFAs can be exponentially more succinct than DFAs.

  16. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    Teaching essential Artificial Intelligence (AI) methods is an important task for educators in the branch of soft computing. The key focus is often placed on proper understanding of AI methods in two essential respects: why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in sensible time. We present an interesting problem, solved in non-educational research, concerning the automated generation of specific algebras in a huge search space, and we treat it as an educational case study emphasizing the two points above.

  17. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@orc.ru

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the possibility of solving nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iteration.
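
    As a hedged illustration of the representation idea above (not the EDELWEISS code itself), the following Python sketch solves a small linear Cauchy problem dy/dt = Ay exactly via the matrix exponential and then fits a Legendre series, a family of orthogonal polynomials on the chosen domain [0, 1], to each solution component. The system matrix, polynomial degree, and sampling grid are invented for illustration.

      # Hedged sketch: orthogonal-polynomial (Legendre) series representation
      # of the solution of a linear Cauchy problem dy/dt = A y, y(0) = y0.
      import numpy as np
      from numpy.polynomial.legendre import Legendre
      from scipy.linalg import expm

      A = np.array([[0.0, 1.0], [-4.0, -0.5]])       # illustrative system matrix
      y0 = np.array([1.0, 0.0])

      ts = np.linspace(0.0, 1.0, 201)
      ys = np.array([expm(A * t) @ y0 for t in ts])  # reference solution samples

      # Fit a degree-12 Legendre series on the domain [0, 1] to each component.
      series = [Legendre.fit(ts, ys[:, i], 12, domain=[0.0, 1.0])
                for i in range(len(y0))]

      # The compact series can now be evaluated anywhere on [0, 1].
      print(series[0](0.5), (expm(0.5 * A) @ y0)[0])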

  18. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the particular characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed, and that the computation time is polynomial in the number of users.
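
    The GLS pattern described above (an evolutionary outer loop that moves only between locally optimal solutions) can be sketched in Python. The quadratic objective below is a stand-in, not the OMD likelihood from the paper, and all parameters are invented for illustration.

      # Hedged sketch: genetic local search over bit vectors in {-1, +1}^n.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 16
      H = rng.standard_normal((n, n)); H = H @ H.T   # stand-in coupling matrix
      y = rng.standard_normal(n)                     # stand-in observations

      def cost(b):
          return b @ H @ b - 2.0 * y @ b             # illustrative quadratic cost

      def local_search(b):
          # Iterated bit-flip descent: keep any single flip that lowers the cost.
          c, improved = cost(b), True
          while improved:
              improved = False
              for i in range(n):
                  b[i] = -b[i]
                  c2 = cost(b)
                  if c2 < c:
                      c, improved = c2, True
                  else:
                      b[i] = -b[i]                   # undo the flip
          return b, c

      best, best_c = local_search(rng.choice([-1.0, 1.0], n))
      for _ in range(50):                            # evolution-strategy outer loop
          child = best.copy()
          child[rng.integers(0, n, 3)] *= -1.0       # mutate a few bits
          child, c = local_search(child)
          if c < best_c:
              best, best_c = child, c
      print(best_c)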

  19. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  20. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    NASA Astrophysics Data System (ADS)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study, associated with a new generalization of the Catalan numbers, are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage in solving the basic research problem and, on the other, an independent self-consistent problem.

  1. Space station data management system - A common GSE test interface for systems testing and verification

    NASA Technical Reports Server (NTRS)

    Martinez, Pedro A.; Dunn, Kevin W.

    1987-01-01

    This paper examines the fundamental problems and goals associated with test, verification, and flight-certification of man-rated distributed data systems. First, a summary of the characteristics of modern computer systems that affect the testing process is provided. Then, verification requirements are expressed in terms of an overall test philosophy for distributed computer systems. This test philosophy stems from previous experience that was gained with centralized systems (Apollo and the Space Shuttle), and deals directly with the new problems that verification of distributed systems may present. Finally, a description of potential hardware and software tools to help solve these problems is provided.

  2. Data-driven indexing mechanism for the recognition of polyhedral objects

    NASA Astrophysics Data System (ADS)

    McLean, Stewart; Horan, Peter; Caelli, Terry M.

    1992-02-01

    This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.

  3. On Target Localization Using Combined RSS and AoA Measurements

    PubMed Central

    Beko, Marko; Dinis, Rui

    2018-01-01

    This work reviews existing solutions to the problem of target localization in wireless sensor networks (WSNs) utilizing integrated measurements, namely received signal strength (RSS) and angle of arrival (AoA). The problem of RSS/AoA-based target localization has recently become very popular in the research community, owing to its great applicability potential and relatively low implementation cost. A comprehensive study of the state-of-the-art (SoA) solutions and their detailed analysis is therefore presented here. The work begins with the SoA approaches based on convex relaxation techniques (more computationally complex in general) and then covers less computationally complex approaches as well, such as those based on the generalized trust region sub-problems framework and linear least squares. A detailed analysis of the computational complexity of each solution is given, together with an extensive set of simulation results. Finally, the main conclusions are summarized, and a set of aspects and trends that might be interesting for future research in this area is identified. PMID:29671832

  4. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic accumulation of numerical error. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit, and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
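
    The first step of the method, recasting a discretized system as a least-squares global optimization problem, can be made concrete for the logistic map mentioned above. The Python sketch below treats a whole trajectory as the unknown and minimizes the squared residuals of the recurrence; it shows only this reformulation step, not the paper's canonical duality machinery, and the parameter values are invented.

      # Hedged sketch: least-squares reformulation of the logistic map
      # x_{k+1} = r * x_k * (1 - x_k); the entire trajectory is the unknown.
      import numpy as np
      from scipy.optimize import least_squares

      r, K, x_init = 3.7, 40, 0.3

      def residuals(x):
          # x holds x_1 .. x_K; x_0 is fixed. Residual k enforces the recurrence.
          xs = np.concatenate(([x_init], x))
          return xs[1:] - r * xs[:-1] * (1.0 - xs[:-1])

      sol = least_squares(residuals, np.full(K, 0.5), bounds=(0.0, 1.0))
      print(sol.cost)   # near zero when the recurrence is satisfied exactly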

  5. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.

  6. Wave envelope technique for multimode wave guide problems

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Sudharsanan, S. I.

    1986-01-01

    A fast method for solving wave guide problems is proposed. In particular, the guide is considered to be inhomogeneous, allowing propagation of waves of higher-order modes. Such problems have been handled successfully for acoustic wave propagation with a single mode and finite length. This paper extends the concept to electromagnetic wave guides with several modes and infinite length. The method is described and results of computations are presented.

  7. A Program for Automatic Generation of Dimensionless Parameters.

    ERIC Educational Resources Information Center

    Hundal, M. S.

    1982-01-01

    Following a review of the theory of dimensional analysis, presents a method for generating all of the possible sets of nondimensional parameters for a given problem, a digital computer program to implement the method, and a mechanical design problem to illustrate its use. (Author/JN)

  8. Effective dimensional reduction algorithm for eigenvalue problems for thin elastic structures: A paradigm in three dimensions

    PubMed Central

    Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.

    2000-01-01

    We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469

  9. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.

  10. Aerobraking strategies for the sample of comet coma earth return mission

    NASA Astrophysics Data System (ADS)

    Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.

    The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission, using a six-DOF computer simulation of the aerobraking process, are presented. The SOCCER spacecraft, the aerobraking scenario, and the power supply problem are briefly described. Results are presented for the spin effect, the payload exposure problem, and the sun angle effect.

  11. Aerobraking strategies for the sample of comet coma earth return mission

    NASA Technical Reports Server (NTRS)

    Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.

    1990-01-01

    The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission, using a six-DOF computer simulation of the aerobraking process, are presented. The SOCCER spacecraft, the aerobraking scenario, and the power supply problem are briefly described. Results are presented for the spin effect, the payload exposure problem, and the sun angle effect.

  12. Optimization of computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhalevich, V.S.; Sergienko, I.V.; Zadiraka, V.K.

    1994-11-01

    This article examines some topics of optimization of computations, which have been discussed at 25 seminar-schools and symposia organized by the V.M. Glushkov Institute of Cybernetics of the Ukrainian Academy of Sciences since 1969. We describe the main directions in the development of computational mathematics and present some of our own results that reflect a certain design conception of speed-optimal and accuracy-optimal (or nearly optimal) algorithms for various classes of problems, as well as a certain approach to optimization of computer computations.

  13. Hybrid computer optimization of systems with random parameters

    NASA Technical Reports Server (NTRS)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  14. Statistical energy analysis computer program, user's guide

    NASA Technical Reports Server (NTRS)

    Trudell, R. W.; Yano, L. I.

    1981-01-01

    A high-frequency random vibration analysis (the statistical energy analysis (SEA) method) is examined. The SEA method accomplishes high-frequency prediction for arbitrary structural configurations. A general SEA computer program is described. A summary of SEA theory, example problems of SEA program application, and a complete program listing are presented.

  15. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  16. Teaching Cardiovascular Integrations with Computer Laboratories.

    ERIC Educational Resources Information Center

    Peterson, Nils S.; Campbell, Kenneth B.

    1985-01-01

    Describes a computer-based instructional unit in cardiovascular physiology. The program, which employs simulated laboratory experimental techniques with a problem-solving format, is designed to supplement an animal laboratory and to offer students an integrative approach to physiology through use of microcomputers. Also presents an overview of the…

  17. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV), or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in the memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are obtained in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques are presented and discussed for expanding the problem size that the current generation of computers is capable of solving. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.

  18. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  19. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  20. The Ordered Clustered Travelling Salesman Problem: A Hybrid Genetic Algorithm

    PubMed Central

    Ahmed, Zakir Hussain

    2014-01-01

    The ordered clustered travelling salesman problem is a variation of the usual travelling salesman problem in which a set of vertices (except the starting vertex) of the network is divided into some prespecified clusters. The objective is to find the least cost Hamiltonian tour in which vertices of any cluster are visited contiguously and the clusters are visited in the prespecified order. The problem is NP-hard, and it arises in practical transportation and sequencing problems. This paper develops a hybrid genetic algorithm using sequential constructive crossover, 2-opt search, and a local search for obtaining a heuristic solution to the problem. The efficiency of the algorithm has been examined against two existing algorithms for some asymmetric and symmetric TSPLIB instances of various sizes. The computational results show that the proposed algorithm is very effective in terms of solution quality and computational time. Finally, we present solutions to some more symmetric TSPLIB instances. PMID:24701148
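
    Of the three search components named above, 2-opt is the easiest to isolate. The Python sketch below applies plain 2-opt improvement to a random Euclidean tour; it is a hedged stand-in for that single ingredient and omits the sequential constructive crossover and the cluster-ordering constraints of the paper.

      # Hedged sketch: 2-opt local improvement of a tour (Euclidean distances).
      import numpy as np

      pts = np.random.default_rng(1).random((30, 2))   # illustrative cities

      def tour_length(tour):
          d = pts[tour] - pts[np.roll(tour, -1)]
          return np.hypot(d[:, 0], d[:, 1]).sum()

      def two_opt(tour):
          # Reverse the segment tour[i:j] whenever doing so shortens the tour.
          best, improved = tour_length(tour), True
          while improved:
              improved = False
              for i in range(1, len(tour) - 1):
                  for j in range(i + 1, len(tour)):
                      cand = np.concatenate((tour[:i], tour[i:j][::-1], tour[j:]))
                      length = tour_length(cand)
                      if length < best - 1e-12:
                          tour, best, improved = cand, length, True
          return tour, best

      tour, length = two_opt(np.arange(len(pts)))
      print(length)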

  1. An improved V-Lambda solution of the matrix Riccati equation

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Markley, F. Landis

    1988-01-01

    The authors present an improved algorithm for computing the V-Lambda solution of the matrix Riccati equation. The improvement, a reduction of the computational load, results from the orthogonality of the eigenvector matrix that has to be solved for. The orthogonality constraint reduces the number of independent parameters which define the matrix from n² to n(n-1)/2. The authors show how to specify the parameters, how to solve for them, and how to form from them the needed eigenvector matrix. In the search for suitable parameters, the analogy between the present problem and the problem of attitude determination is exploited, resulting in the choice of Rodrigues parameters.
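
    The parameter count in the abstract can be illustrated with a Cayley transform, an n-dimensional relative of the Rodrigues parameters mentioned above: any skew-symmetric matrix S, which has exactly n(n-1)/2 free entries, maps to an orthogonal matrix Q = (I - S)(I + S)^{-1}. The Python sketch below illustrates this counting argument; it is not the authors' V-Lambda algorithm.

      # Hedged sketch: an orthogonal n x n matrix from n(n-1)/2 parameters
      # via the Cayley transform Q = (I - S)(I + S)^{-1}, S skew-symmetric.
      import numpy as np

      def orthogonal_from_params(p, n):
          assert p.size == n * (n - 1) // 2
          S = np.zeros((n, n))
          S[np.triu_indices(n, k=1)] = p        # fill the upper triangle
          S -= S.T                              # make S skew-symmetric
          I = np.eye(n)
          return (I - S) @ np.linalg.inv(I + S)

      n = 4
      p = np.random.default_rng(2).standard_normal(n * (n - 1) // 2)
      Q = orthogonal_from_params(p, n)
      print(np.allclose(Q.T @ Q, np.eye(n)))    # True: Q is orthogonal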

  2. Use of a personal computer for dynamical engineering illustrations in a classroom and over an instructional TV network

    NASA Technical Reports Server (NTRS)

    Watson, V. R.

    1983-01-01

    A personal computer has been used to illustrate physical phenomena and problem solution techniques in engineering classes. According to student evaluations, instruction of concepts was greatly improved through the use of these illustrations. This paper describes the class of phenomena that can be effectively illustrated, the techniques used to create these illustrations, and the techniques used to display the illustrations in regular classrooms and over an instructional TV network. The features of a personal computer required to apply these techniques are listed. The capabilities of some present personal computers are discussed and a forecast of the capabilities of future personal computers is presented.

  3. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
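
    A minimal sketch of the simulated-annealing core described above follows; it anneals a short sequence of joint angles toward a goal configuration while penalizing a pretend forbidden band, accepting occasional uphill moves by the Metropolis rule so the search can escape local minima. The energy function, cooling schedule, and all constants are invented stand-ins, not the flight software.

      # Hedged sketch: simulated annealing over a path of joint angles.
      import math, random

      random.seed(0)
      goal = [0.8, -0.4, 1.2]                    # illustrative goal joint angles

      def energy(path):
          # Distance of the final configuration to the goal, plus a penalty
          # for configurations inside a pretend collision band.
          e = sum((a - g) ** 2 for a, g in zip(path[-1], goal))
          return e + sum(10.0 for cfg in path if abs(cfg[0] - 0.5) < 0.05)

      path = [[0.0, 0.0, 0.0] for _ in range(10)]
      e, T = energy(path), 1.0
      while T > 1e-3:
          i, j = random.randrange(len(path)), random.randrange(3)
          old = path[i][j]
          path[i][j] += random.gauss(0.0, 0.1)   # propose a small random move
          e_new = energy(path)
          # Metropolis rule: always accept downhill, sometimes accept uphill.
          if e_new < e or random.random() < math.exp((e - e_new) / T):
              e = e_new
          else:
              path[i][j] = old                   # reject: undo the move
          T *= 0.999                             # geometric cooling schedule
      print(e)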

  4. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight set of edges connecting all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral paths, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain solutions of the MST problem in a proper length range with O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
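
    For orientation, the same MST problem is solved on a conventional computer by Kruskal's algorithm in O(m log m) time; the contribution of the paper is performing the search with DNA strand operations instead. A hedged conventional baseline, with an invented toy graph:

      # Hedged baseline: Kruskal's MST with union-find (not the DNA algorithm).
      def kruskal(n, edges):
          """edges: list of (weight, u, v); vertices are 0 .. n-1."""
          parent = list(range(n))
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]   # path halving
                  x = parent[x]
              return x
          tree = []
          for w, u, v in sorted(edges):           # cheapest edges first
              ru, rv = find(u), find(v)
              if ru != rv:                        # the edge joins two components
                  parent[ru] = rv
                  tree.append((u, v, w))
          return tree

      print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))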

  5. Seeing around a Ball: Complex, Technology-Based Problems in Calculus with Applications in Science and Engineering-Redux

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    A complex technology-based problem in visualization and computation for students in calculus is presented. Strategies are shown for its solution and the opportunities for students to put together sequences of concepts and skills to build for success are highlighted. The problem itself involves placing an object under water in order to actually see…

  6. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  7. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.

  8. Improve Problem Solving Skills through Adapting Programming Tools

    NASA Technical Reports Server (NTRS)

    Shaykhian, Linda H.; Shaykhian, Gholam Ali

    2007-01-01

    There are numerous ways for engineers and students to become better problem-solvers. The use of command-line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improving problem-solving skills through the use of a programming tool. MATLAB, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and to plotting of functions and data. MATLAB can be used interactively at a command line, or sequences of commands can be saved in a file as a script or as named functions. Prior programming experience is not required to use MATLAB commands. GNU Octave, part of the GNU Project, is a free computer program for performing numerical computations, comparable to MATLAB. MATLAB visual and command programming are presented here.

  9. [A survey of the best bibliographic searching system in occupational medicine and discussion of its implementation].

    PubMed

    Inoue, J

    1991-12-01

    When occupational health personnel, especially occupational physicians, search bibliographies, they usually have to search by themselves. Also, if a library is not available because of the location of their workplace, they may have to rely on online databases. Although there are many commercial databases in the world, people who seldom use them will have problems with online searching, such as the user-computer interface, keywords, and so on. The present study surveyed, by questionnaire, the best bibliographic searching system in the field of occupational medicine, using DIALOG OnDisc MEDLINE as the commercial database. In order to ascertain the problems involved in determining the best bibliographic searching system, a prototype bibliographic searching system was constructed and then evaluated. Finally, solutions for the problems were discussed. These led to the following conclusions for constructing the best bibliographic searching system at the present time: 1) a concept of micro-to-mainframe links (MML) is needed for the computer hardware network; 2) multi-lingual font standards and an excellent common user-computer interface are needed for the computer software; 3) a short course on, and education in, database management systems, and support of personal information processing for retrieved data, are necessary for the practical use of the system.

  10. Proceedings of the Toronto TEAM/ACES workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, L.R.

    The third TEAM Workshop of the third round was held at Ontario Hydro in Toronto 25--26 October 1990, immediately following the Conference on Electromagnetic Field Computation. This was the first joint workshop with ACES (Applied Computational Electromagnetics Society), whose goals are similar to TEAM's but whose members tend to work at higher frequencies (antennas, propagation, and scattering). A fusion problem, the eddy current heating of the case of the Euratom Large Coil Project coil, was adapted as Problem 14 at the Oxford Workshop, and a solution to that problem was presented at Toronto by Oskar Biro of the Graz (Austria) University of Technology. Individual solutions were also presented for Problems 8 (Flaw in a Plate) and 9 (Moving Coil inside a Pipe). Five new solutions were presented to Problem 13 (DC Coil in a Ferromagnetic Yoke), and Koji Fujiwara of Okayama University summarized these solutions along with the similar number presented at Oxford. The solutions agreed well in the air but disagreed in the steel. Codes with a formulation in magnetic field strength or scalar potential underestimated the flux density in the steel, and codes based on flux density or vector potential overestimated it. Codes with edge elements appeared to do better than codes with nodal elements. These results stimulated considerable discussion; in my view that was the most valuable result of the workshop.

  11. Intelligent supercomputers: the Japanese computer sputnik

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  12. Flood inundation extent mapping based on block compressed tracing

    NASA Astrophysics Data System (ADS)

    Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang

    2015-07-01

    Flood inundation extent, depth, and duration are important factors in flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is inefficient because it requires excessive recursive computation and is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which solves the memory problem of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves flood inundation extent mapping on massive DEM datasets with higher computational efficiency than the original method, making it suitable for practical applications.
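
    For context, the conventional approach that the paper improves on, seeded region growing over a DEM, can be written with an explicit queue; this avoids deep recursion but, unlike the block compressed tracing method, still keeps the whole raster in memory. A hedged sketch with an invented toy DEM and water level:

      # Hedged sketch: queue-based seeded region growing on a toy DEM.
      # Cells with elevation <= water_level reachable from the seed are flooded.
      from collections import deque

      dem = [[5, 4, 4, 6],
             [4, 2, 3, 6],
             [6, 2, 2, 6],
             [6, 6, 6, 6]]
      water_level, seed = 4, (1, 1)

      rows, cols = len(dem), len(dem[0])
      flooded, queue = set(), deque([seed])
      while queue:
          r, c = queue.popleft()
          if (r, c) in flooded or not (0 <= r < rows and 0 <= c < cols):
              continue
          if dem[r][c] > water_level:
              continue                      # dry cell: stop growing here
          flooded.add((r, c))
          queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])

      print(sorted(flooded))                # the inundation extent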

  13. MSFC crack growth analysis computer program, version 2 (users manual)

    NASA Technical Reports Server (NTRS)

    Creager, M.

    1976-01-01

    An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.

  14. On Computations of Duct Acoustics with Near Cut-Off Frequency

    NASA Technical Reports Server (NTRS)

    Dong, Thomas Z.; Povinelli, Louis A.

    1997-01-01

    The cut-off is a unique feature associated with duct acoustics due to the presence of duct walls. A study of this cut-off effect on the computations of duct acoustics is performed in the present work. The results show that the computation of duct acoustic modes near cut-off requires higher numerical resolutions than others to avoid being numerically cut off. Duct acoustic problems in Category 2 are solved by the DRP finite difference scheme with the selective artificial damping method and results are presented and compared to reference solutions.

  15. Robust computation of dipole electromagnetic fields in arbitrarily anisotropic, planar-stratified environments.

    PubMed

    Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay

    2014-01-01

    We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.

  16. A novel neural network for the synthesis of antennas and microwave devices.

    PubMed

    Delgado, Heriberto Jose; Thursby, Michael H; Ham, Fredric M

    2005-11-01

    A novel artificial neural network (SYNTHESIS-ANN) is presented, which has been designed for computationally intensive problems and applied to the optimization of antennas and microwave devices. The antenna example presented is optimized with respect to voltage standing-wave ratio, bandwidth, and frequency of operation. A simple microstrip transmission line problem is used to further describe the ANN effectiveness, in which microstrip line width is optimized with respect to line impedance. The ANNs exploit a unique number representation of input and output data in conjunction with a more standard neural network architecture. An ANN consisting of a heteroassociative memory provided a very efficient method of computing necessary geometrical values for the antenna when used in conjunction with a new randomization process. The number representation used provides significant insight into this new method of fault-tolerant computing. Further work is needed to evaluate the potential of this new paradigm.

  17. Quantum communication and information processing

    NASA Astrophysics Data System (ADS)

    Beals, Travis Roland

    Quantum computers enable dramatically more efficient algorithms for solving certain classes of computational problems, but, in doing so, they create new problems. In particular, Shor's Algorithm allows for efficient cryptanalysis of many public-key cryptosystems. As public key cryptography is a critical component of present-day electronic commerce, it is crucial that a working, secure replacement be found. Quantum key distribution (QKD), first developed by C.H. Bennett and G. Brassard, offers a partial solution, but many challenges remain, both in terms of hardware limitations and in designing cryptographic protocols for a viable large-scale quantum communication infrastructure. In Part I, I investigate optical lattice-based approaches to quantum information processing. I look at details of a proposal for an optical lattice-based quantum computer, which could potentially be used for both quantum communications and for more sophisticated quantum information processing. In Part III, I propose a method for converting and storing photonic quantum bits in the internal state of periodically-spaced neutral atoms by generating and manipulating a photonic band gap and associated defect states. In Part II, I present a cryptographic protocol which allows for the extension of present-day QKD networks over much longer distances without the development of new hardware. I also present a second, related protocol which effectively solves the authentication problem faced by a large QKD network, thus making QKD a viable, information-theoretic secure replacement for public key cryptosystems.

  18. Dynamic graph cuts for efficient inference in Markov Random Fields.

    PubMed

    Kohli, Pushmeet; Torr, Philip H S

    2007-12-01

    In this paper we present a fast, new, fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time taken is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. The application of our algorithm is not limited to this problem; the algorithm is generic and can be used to yield similar improvements in many other cases involving dynamic change.
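
    The dynamic update itself is not reproduced here, but the static problem it accelerates, an st-mincut on a capacitated graph, can be sketched with networkx. In the paper's setting the flow from one video frame would warm-start the next frame's computation rather than being recomputed from scratch as below; the tiny graph is invented.

      # Hedged sketch: the static st-mincut that the dynamic algorithm speeds up.
      import networkx as nx

      G = nx.DiGraph()
      # Terminals 's' and 't' play the roles of object/background labels;
      # interior nodes stand in for pixels.
      G.add_edge("s", "a", capacity=3.0)
      G.add_edge("s", "b", capacity=1.0)
      G.add_edge("a", "b", capacity=1.0)
      G.add_edge("a", "t", capacity=1.0)
      G.add_edge("b", "t", capacity=3.0)

      cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
      print(cut_value, sorted(source_side), sorted(sink_side))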

  19. A review on recent contribution of meshfree methods to structure and fracture mechanics applications.

    PubMed

    Daxini, S D; Prajapati, J M

    2014-01-01

    Meshfree methods are viewed as next-generation computational techniques. With the evident limitations of conventional grid-based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods, like EFG and MLPG, to various structural mechanics and fracture mechanics applications: bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis, along with some typical applications such as vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Some improved versions of the original meshfree methods, and other techniques suggested by researchers to improve their computational efficiency, are also reviewed here.

  20. An evaluation of exact methods for the multiple subset maximum cardinality selection problem.

    PubMed

    Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas

    2016-05-01

    The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.
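
    The single-subset core of the problem can be stated in a few lines. The hedged brute-force sketch below finds the largest subset whose within-subset pairwise similarities all exceed a threshold, checking candidate subsets largest-first; the published methods replace this exponential search with branch-and-bound or a series of binary integer linear programs. The similarity values are invented.

      # Hedged sketch: brute-force maximum cardinality subset selection.
      # Condition: every within-subset pairwise similarity exceeds a threshold.
      from itertools import combinations

      sim = {(0, 1): 0.9, (0, 2): 0.2, (0, 3): 0.8,
             (1, 2): 0.7, (1, 3): 0.85, (2, 3): 0.1}
      threshold, items = 0.6, range(4)

      def feasible(subset):
          return all(sim[pair] > threshold for pair in combinations(subset, 2))

      for size in range(len(items), 0, -1):        # try the largest subsets first
          winners = [s for s in combinations(items, size) if feasible(s)]
          if winners:
              print(size, winners)                 # here: 3 [(0, 1, 3)]
              break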

  1. Numerical simulation of liquid jet impact on a rigid wall

    NASA Astrophysics Data System (ADS)

    Aganin, A. A.; Guseva, T. S.

    2016-11-01

    Basic points of a numerical technique for computing high-speed liquid jet impact on a rigid wall are presented. In the technique, the flows of the liquid and the surrounding gas are governed by the equations of gas dynamics in terms of density, velocity, and pressure, which are integrated by the CIP-CUP method on dynamically adaptive grids without explicitly tracking the gas-liquid interface. The efficiency of the technique is demonstrated by computations of the impact of a liquid cone and a liquid wedge on a wall in the regime where the shock wave touches the wall at its edge. Numerical solutions of these problems are compared with the analytical solution of the problem of impact of a plane liquid flow on a wall. Applicability of the technique to high-speed liquid jet impact on a wall is illustrated by computations of the impact of a cylindrical liquid jet with a hemispherical end on a wall covered by a layer of the same liquid.

  2. Resource-Competing Oscillator Network as a Model of Amoeba-Based Neurocomputer

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki

    An amoeboid organism, Physarum, exhibits rich spatiotemporal oscillatory behavior and various computational capabilities. Previously, the authors created a recurrent neurocomputer incorporating the amoeba as a computing substrate to solve optimization problems. In this paper, considering the amoeba to be a network of oscillators coupled such that they compete for constant amounts of resources, we present a model of the amoeba-based neurocomputer. The model generates a number of oscillation modes and produces not only simple behavior that stabilizes a single mode but also complex behavior that spontaneously switches among different modes, reproducing well the experimentally observed behavior of the amoeba. To explore the significance of the complex behavior, we set a test problem used to compare the computational performances of the oscillation modes: an optimization problem of how to allocate a limited amount of resource to the oscillators such that conflicts among them are minimized. We show that the complex behavior attains a wider variety of solutions to the problem and produces better performance than the simple behavior.

  3. Reliability enhancement of Navier-Stokes codes through convergence acceleration

    NASA Technical Reports Server (NTRS)

    Merkle, Charles L.; Dulikravich, George S.

    1995-01-01

    Methods for enhancing the reliability of Navier-Stokes computer codes by improving their convergence characteristics are presented. Improving these characteristics decreases the likelihood of code failure and of user intervention in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly with regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm the correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations describing the transport of turbulent kinetic energy, dissipation rate, and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with special emphasis on the acceleration of convergence on highly clustered grids.

  4. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  5. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.

  6. Solution of large nonlinear quasistatic structural mechanics problems on distributed-memory multiprocessor computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanford, M.

    1997-12-31

    Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it never assembles a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.

  7. Numerical formulation for the prediction of solid/liquid change of a binary alloy

    NASA Technical Reports Server (NTRS)

    Schneider, G. E.; Tiwari, S. N.

    1990-01-01

    A computational model is presented for the prediction of solid/liquid phase change energy transport, including the influence of free-convection fluid flow in the liquid-phase region. The model sets the velocity components of all non-liquid phase-change-material control volumes to zero but fully solves the coupled mass-momentum problem within the liquid region. The thermal energy model spans the entire domain and uses an enthalpy-like formulation together with a recently developed method for handling the phase-change interface nonlinearity. Convergence studies are performed and comparisons made with experimental data for two different problem specifications. The convergence studies indicate that grid independence was achieved, and the comparison with experimental data shows excellent quantitative prediction of the melt fraction evolution. Qualitative data are also provided in the form of velocity vector diagrams and isotherm plots for selected times in the evolution of both problems. The computational costs incurred are quite low in comparison with previous efforts on these problems.

  8. Simple and complex mental subtraction: strategy choice and speed-of-processing differences in younger and older adults.

    PubMed

    Geary, D C; Frensch, P A; Wiley, J G

    1993-06-01

    Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.

  9. The sound of moving bodies. Ph.D. Thesis - Cambridge Univ.

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth Steven

    1990-01-01

    The importance of the quadrupole source term in the Ffowcs Williams and Hawkings (FWH) equation was addressed. The quadrupole source contains fundamental components of the complete fluid mechanics problem, which are ignored only at the risk of error. The results made it clear that any application of the acoustic analogy should begin with all of the source terms in the FWH theory. The direct calculation of the acoustic field as part of the complete unsteady fluid mechanics problem using CFD is considered. It was shown that aeroacoustic calculations can indeed be made with CFD codes. The results indicate that the acoustic field is the component of the computation most susceptible to numerical error; therefore, the ability to measure the damping of acoustic waves is essential to the development of acoustic computations. Essential groundwork for a new approach to the problem of sound generation by moving bodies is presented. This new computational acoustic approach holds the promise of solving many problems hitherto pushed aside.

  10. Computational Analysis of a Prototype Martian Rotorcraft Experiment

    NASA Technical Reports Server (NTRS)

    Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.

    2002-01-01

    This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.

  11. MacDoctor: The Macintosh diagnoser

    NASA Technical Reports Server (NTRS)

    Lavery, David B.; Brooks, William D.

    1990-01-01

    When the Macintosh computer was first released, the primary user was a computer hobbyist who typically had a significant technical background and was highly motivated to understand the internal structure and operational intricacies of the computer. In recent years the Macintosh has become a widely accepted general-purpose computer used by an ever-increasing non-technical audience. This has led to a large base of users who have neither the interest nor the background to understand what is happening 'behind the scenes' when the Macintosh is put to use, or what should be happening when something goes wrong. Additionally, the Macintosh itself has evolved from a simple closed design to a complete family of processor platforms and peripherals with a tremendous number of possible configurations. With the increasing popularity of the Macintosh series, software and hardware developers are producing a product for every user's need. As the complexity of configuration possibilities grows, experienced or even expert knowledge is required to diagnose problems, which poses a difficulty for inexperienced or casual users. This indicates a new Macintosh consumer need: a diagnostic tool able to determine the problem for the user. As the volume of Macintosh products has increased, this need has also increased.

  12. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful at accelerating the convergence of iterative solvers, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric linear systems that avoids both difficulties. We explicitly compute an approximate inverse of the original matrix. This preconditioning matrix can be applied most efficiently in iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism: for a problem of size n, the preconditioning matrix can be computed by solving n independent small least-squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
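
    A hedged sketch of the approximate-inverse construction described above (a generic SPAI-style fragment, not necessarily the authors' exact algorithm): each column m_j of M ≈ A⁻¹ solves an independent small least-squares problem min ‖A m_j − e_j‖₂ over a fixed sparsity pattern, so all n columns can in principle be computed in parallel, and applying the preconditioner is just a matrix-vector product.

      import numpy as np

      def spai(A, pattern):
          """M ~ inv(A): column j minimizes ||A[:, J_j] m - e_j||_2 over pattern J_j."""
          n = A.shape[0]
          M = np.zeros((n, n))
          for j in range(n):                    # embarrassingly parallel loop
              J = pattern[j]                    # allowed nonzero rows of column j
              e = np.zeros(n)
              e[j] = 1.0
              m, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
              M[J, j] = m
          return M

      rng = np.random.default_rng(1)
      n = 8
      A = 4.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))   # toy unsymmetric matrix
      # Hypothetical pattern: tridiagonal support for each column.
      pattern = [sorted({max(j - 1, 0), j, min(j + 1, n - 1)}) for j in range(n)]
      M = spai(A, pattern)
      print("||I - A M||_F =", np.linalg.norm(np.eye(n) - A @ M))
      # In the iterative phase the preconditioner is applied as the product M @ v.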

  13. A mixed analog/digital chaotic neuro-computer system for quadratic assignment problems.

    PubMed

    Horio, Yoshihiko; Ikeguchi, Tohru; Aihara, Kazuyuki

    2005-01-01

    We construct a mixed analog/digital chaotic neuro-computer prototype system for quadratic assignment problems (QAPs). The QAP is a difficult NP-hard problem with several real-world applications. Chaotic neural networks have been used to solve combinatorial optimization problems through chaotic search dynamics, which efficiently search for optimal or near-optimal solutions. However, preliminary experiments showed that, although it obtained good feasible solutions, the Hopfield-type chaotic neuro-computer hardware system could not obtain the optimal solution of the QAP. Therefore, in the present study, we improve the system performance by adopting a solution construction method, which builds a feasible solution from the analog internal state values of the chaotic neurons at each iteration. To incorporate the construction method into our hardware, we install a multi-channel analog-to-digital conversion system to observe the internal states of the chaotic neurons. We show experimentally that this yields a great improvement in system performance over the original Hopfield-type chaotic neuro-computer: we obtain the optimal solution for the size-10 QAP in fewer than 1000 iterations. In addition, we propose a guideline for parameter tuning of the chaotic neuro-computer system based on observation of the internal states of several chaotic neurons in the network.
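
    For reference, the QAP the network searches over can be stated in standard permutation form (generic textbook notation, not the authors' specific formulation):

      \min_{\pi \in S_n} \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij} \, d_{\pi(i)\pi(j)}

    where f_{ij} is the flow between facilities i and j, d_{kl} is the distance between locations k and l, and the permutation \pi assigns facilities to locations.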

  14. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy in which short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource; nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, on the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  15. A Man-Machine System for Contemporary Counseling Practice: Diagnosis and Prediction.

    ERIC Educational Resources Information Center

    Roach, Arthur J.

    This paper looks at present and future capabilities for diagnosis and prediction in computer-based guidance efforts and reviews the problems and potentials which will accompany the implementation of such capabilities. In addition to necessary procedural refinement in prediction, future developments in computer-based educational and career…

  16. Repetitive Domain-Referenced Testing Using Computers: the TITA System.

    ERIC Educational Resources Information Center

    Olympia, P. L., Jr.

    The TITA (Totally Interactive Testing and Analysis) System algorithm for the repetitive construction of domain-referenced tests utilizes a compact data bank, is highly portable, is useful in any discipline, requires modest computer hardware, and does not present a security problem. Clusters of related keyphrases, statement phrases, and distractors…

  17. Conversion of PCDP Dialogs.

    ERIC Educational Resources Information Center

    Bork, Alfred M.

    An introduction to the problems involved in conversion of computer dialogues from one computer language to another is presented. Conversion of individual dialogues by complete rewriting is straightforward, if tedious. To make a general conversion of a large group of heterogeneous dialogue material from one language to another at one step is more…

  18. KAPEAN: Understanding Affective States of Children with ADHD

    ERIC Educational Resources Information Center

    Martínez, Fernando; Barraza, Claudia; González, Nimrod; González, Juan

    2016-01-01

    Affective computing seeks to create computational systems that adapt content and resources according to the affective states of their users. However, detecting the user's affect, such as motivation and emotion, is challenging, especially when an attention problem is present. An approach to convey learning resources to children with learning…

  19. Performance bounds for nonlinear systems with a nonlinear ℒ2-gain property

    NASA Astrophysics Data System (ADS)

    Zhang, Huan; Dower, Peter M.

    2012-09-01

    Nonlinear ℒ2-gain is a finite-gain concept that generalises the notion of conventional (linear) finite ℒ2-gain, admitting the application of ℒ2-gain analysis tools to a broader class of nonlinear systems. The computation of tight comparison-function bounds for this nonlinear ℒ2-gain property is important in applications such as small-gain design. This article presents an approximation framework for these comparison-function bounds through the formulation and solution of an optimal control problem. Key to the solution of this problem is the lifting of an ℒ2-norm input constraint, which is facilitated via the introduction of an energy saturation operator. This admits the solution of the optimal control problem of interest via dynamic programming and associated numerical methods, leading to the computation of the proposed bounds. Two examples are presented to demonstrate the approach.
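
    For orientation, the property in question can be stated in standard comparison-function form (my paraphrase of common usage, not the authors' exact definitions): a system has (linear) finite ℒ2-gain γ ≥ 0 if, for all inputs u and horizons T,

      \| y \|_{\mathcal{L}_2[0,T]} \le \gamma \, \| u \|_{\mathcal{L}_2[0,T]} + \beta(x_0),

    while the nonlinear ℒ2-gain property replaces the constant γ with a monotone comparison function γ(·):

      \| y \|_{\mathcal{L}_2[0,T]} \le \gamma\big( \| u \|_{\mathcal{L}_2[0,T]} \big) + \beta(x_0).

    Computing tight upper bounds for such γ(·) is the optimal control problem the article addresses.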

  20. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
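
    A minimal sketch of a control variate estimator in generic Monte Carlo form (the surrogate pairing below is a hypothetical stand-in for the paper's PQI setting): given samples Y of the quantity of interest and correlated samples X with exactly known mean μ_X, the estimator Ȳ − c*(X̄ − μ_X), with c* = Cov(X, Y)/Var(X), is unbiased and has lower variance than the plain sample mean.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 10_000

      # Hypothetical setting: Y is the expensive PQI sample; X is a cheap, correlated
      # control variate (e.g., the same quantity from a simplified model) whose mean
      # mu_x is known exactly.
      U = rng.uniform(size=N)
      Y = np.exp(U)            # quantity of interest, E[Y] = e - 1
      X = U                    # control variate, known mean mu_x = 0.5
      mu_x = 0.5

      c_star = np.cov(X, Y)[0, 1] / X.var(ddof=1)
      est_mc = Y.mean()
      est_cv = Y.mean() - c_star * (X.mean() - mu_x)
      print("plain MC estimate   :", est_mc)
      print("control variate est.:", est_cv, "  exact:", np.e - 1)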

  1. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools that help power market participants reduce the price risks associated with transmission congestion. FTRs are issued by solving a constrained optimization problem whose objective is to maximize FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal, or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can provide only a limited number of FTR categories because of these inherent computational challenges. In this paper, an innovative mathematical reformulation of the FTR problem is first presented that dramatically improves the computational efficiency of the optimization problem. After reformulating the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve it. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers such as CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases better than, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.

  2. Quantum Optical Implementations of Current Quantum Computing Paradigms

    DTIC Science & Technology

    2005-05-01

    Conferences and Proceedings: The results were presented at several conferences, including: 1. M. O. Scully, "Foundations of Quantum Mechanics", in... applications have revealed a strong connection between the fundamental aspects of quantum mechanics that govern physical systems and the informational... could be solved in polynomial time using quantum computers. Another set of problems where quantum mechanics can carry out computations substantially

  3. A Research Study of Computer-Based Tutoring of Mathematical and Scientific Knowledge. Final Technical Report.

    ERIC Educational Resources Information Center

    Goldstein, Ira

    Computer coaching of students as an aid in problem-solving instruction is discussed. This report describes an advanced form of computer-assisted instruction that must not only present the material to be taught, but also analyze the student's responses. The program must decide whether to intervene and how much to say to a pupil based on its…

  4. Transport synthetic acceleration for long-characteristics assembly-level transport problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The authors apply the transport synthetic acceleration (TSA) scheme to the long-characteristics spatial discretization for the two-dimensional assembly-level transport problem. This synthetic method employs a simplified transport operator as its low-order approximation. Thus, in the acceleration step, the authors take advantage of features of the long-characteristics discretization that make it particularly well suited to assembly-level transport problems. The main contribution is to address difficulties unique to the long-characteristics discretization and produce a computationally efficient acceleration scheme. The combination of the long-characteristics discretization, opposing reflecting boundary conditions (which are present in assembly-level transport problems), and TSA presents several challenges. The authors devise methods for overcoming each of them in a computationally efficient way. Since the boundary angular data exist on different grids in the high- and low-order problems, they define restriction and prolongation operations specific to the method of long characteristics to map between the two grids. They implement the conjugate gradient (CG) method in the presence of opposing reflection boundary conditions to solve the TSA low-order equations. The CG iteration may be applied only to symmetric positive definite (SPD) matrices; they prove that the long-characteristics discretization yields an SPD matrix. They present results of the acceleration scheme on a simple test problem, a typical pressurized water reactor assembly, and a typical boiling water reactor assembly.

  5. Rapid optimization of multiple-burn rocket flights.

    NASA Technical Reports Server (NTRS)

    Brown, K. R.; Harrold, E. F.; Johnson, G. W.

    1972-01-01

    Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.

  6. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and provides sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
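
    LSENS itself is a Fortran code; purely as a hedged illustration of the class of problem it handles (not its algorithms), the sketch below integrates a toy first-order reaction A → B and estimates the sensitivity of [A](t) to the rate coefficient k by a central finite difference, which can be checked against the exact solution [A](t) = e^(-kt).

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, k):
          # Toy kinetics A -> B: d[A]/dt = -k[A], d[B]/dt = +k[A].
          a, b = y
          return [-k * a, k * a]

      def concentration_A(k, t_end=1.0, y0=(1.0, 0.0)):
          sol = solve_ivp(rhs, (0.0, t_end), y0, args=(k,), rtol=1e-10, atol=1e-12)
          return sol.y[0, -1]

      k, dk = 2.0, 1e-6
      A_end = concentration_A(k)
      sens = (concentration_A(k + dk) - concentration_A(k - dk)) / (2 * dk)
      print("[A](1)          :", A_end, "  exact:", np.exp(-k))
      print("d[A](1)/dk (FD) :", sens, "  exact:", -np.exp(-k))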

  7. Manage Your Life Online (MYLO): a pilot trial of a conversational computer-based intervention for problem solving in a student sample.

    PubMed

    Gaffney, Hannah; Mansell, Warren; Edwards, Rachel; Wright, Jason

    2014-11-01

    Computerized self-help with an interactive, conversational format holds several advantages, such as flexibility across presenting problems and ease of use. We designed a new program called MYLO that applies the principles of Method of Levels (MOL) therapy, which is based upon Perceptual Control Theory (PCT). We tested the efficacy of MYLO, tested whether the psychological change mechanisms described by PCT mediated its efficacy, and evaluated the effects of client expectancy. Forty-eight student participants were randomly assigned to MYLO or a comparison program, ELIZA. Participants discussed a problem they were currently experiencing with their assigned program and completed measures of distress, resolution, and expectancy pre-intervention, post-intervention, and at 2-week follow-up. MYLO and ELIZA were associated with reductions in distress, depression, anxiety, and stress. MYLO was considered more helpful and led to greater problem resolution. The psychological change processes predicted higher ratings of MYLO's helpfulness and reductions in distress. Positive expectancies towards computer-based problem solving correlated with MYLO's perceived helpfulness and greater problem resolution, and this was partly mediated by the psychological change processes identified. The findings provide provisional support for the acceptability of the MYLO program in a non-clinical sample, although its efficacy as an innovative computer-based aid to problem solving remains unclear. Nevertheless, the findings provide tentative early support for the mechanisms of psychological change identified within PCT and highlight the importance of client expectations in predicting engagement with computer-based self-help.

  8. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time, and a minimax optimization is performed on this objective function to obtain the best mapping. The mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mappings. The first algorithm utilizes the level and communication intensity of the task modules to construct an ordered priority list of ready modules, and module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping, the problem can be solved by a heuristic algorithm with simulated annealing. These optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms; the simulation and experimental results are compared and discussed.
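
    A hedged sketch of the annealing-based strategy in generic form (a precedence-free toy instance, not the authors' algorithm): modules are assigned to processors, and random reassignments are accepted by the Metropolis rule while minimizing an objective that combines the maximum processor finishing time with interprocessor communication.

      import numpy as np

      rng = np.random.default_rng(3)
      m, p = 20, 4
      work = rng.uniform(1.0, 5.0, size=m)              # per-module compute cost
      comm = np.triu(rng.uniform(0.0, 1.0, (m, m)), 1)  # pairwise communication cost

      def objective(assign):
          finish = max(work[assign == q].sum() for q in range(p))
          # Communication is paid only between modules on different processors.
          cross = comm[np.not_equal.outer(assign, assign)].sum()
          return finish + 0.1 * cross

      assign = rng.integers(0, p, size=m)
      cost, T = objective(assign), 5.0
      for _ in range(20_000):
          i, q = rng.integers(m), rng.integers(p)
          old = assign[i]
          assign[i] = q
          new = objective(assign)
          if new <= cost or rng.random() < np.exp(-(new - cost) / T):
              cost = new                      # accept the move
          else:
              assign[i] = old                 # reject and restore
          T *= 0.9995                         # geometric cooling schedule
      print("final objective:", cost)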

  9. WE-D-303-00: Computational Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA

    2015-06-15

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters and a variety of anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate, and compare new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within a simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise for solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements for computational phantoms in medical physics research; discuss the developments and applications of computational phantoms; know the promises and limitations of computational phantoms in solving complex problems.

  10. Dust in the Primary Classroom.

    ERIC Educational Resources Information Center

    Pritchard, Alan

    1990-01-01

    Described is the use of a commercial computer software package, "Dust," to enhance mathematical learning in the classroom. Samples of mathematics problems presented in this game which is a simulation of an adventure in outer space are presented. (CW)

  11. Using Computers in Fluids Engineering Education

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1998-01-01

    Three approaches for using computers to improve basic fluids engineering education are presented. The use of computational fluid dynamics solutions to fundamental flow problems is discussed. The use of interactive, highly graphical software which operates on either a modern workstation or personal computer is highlighted. And finally, the development of 'textbooks' and teaching aids which are used and distributed on the World Wide Web is described. Arguments for and against this technology as applied to undergraduate education are also discussed.

  12. Fast Solution in Sparse LDA for Binary Classification

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add to or delete from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g., 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g., selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes, as compared to the days or weeks taken by the prior art. Sparse-LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables, and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic form along with the inherent sequential nature of greedy search itself. Together these enable the use of highly efficient partitioned-matrix-inverse techniques that result in large speedups of computation in both the forward-selection and backward-elimination stages of greedy algorithms in general.
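
    A hedged sketch of greedy forward selection using the analytic two-class Fisher criterion the abstract alludes to (the partitioned-matrix-inverse speedups are omitted here for clarity; names and data are hypothetical): at each step the feature that maximizes J(S) = d_Sᵀ Σ_{w,S}⁻¹ d_S is added, where d is the class-mean difference and Σ_w the pooled within-class covariance.

      import numpy as np

      def fisher_score(Sw, d, idx):
          """Two-class discriminant value J = d' inv(Sw) d on feature subset idx."""
          idx = np.asarray(idx)
          return d[idx] @ np.linalg.solve(Sw[np.ix_(idx, idx)], d[idx])

      def greedy_sparse_lda(X0, X1, k):
          d = X1.mean(axis=0) - X0.mean(axis=0)
          Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
          selected, remaining = [], list(range(X0.shape[1]))
          for _ in range(k):   # forward selection: add the best feature each step
              best = max(remaining, key=lambda j: fisher_score(Sw, d, selected + [j]))
              selected.append(best)
              remaining.remove(best)
          return selected

      rng = np.random.default_rng(4)
      n, p = 200, 30
      X0 = rng.standard_normal((n, p))
      X1 = rng.standard_normal((n, p))
      X1[:, [3, 7, 11]] += 1.5        # only three features are informative
      print("selected features:", greedy_sparse_lda(X0, X1, 3))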

  13. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and provides sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  14. A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.

    PubMed

    Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas

    2015-12-01

    Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side information, or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP in which the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data, where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.

  15. Recent work on airfoil theory

    NASA Technical Reports Server (NTRS)

    Prandtl, L

    1940-01-01

    The basic ideas of a new method for treating the problem of the airfoil are presented, and a review is given of the problems thus far computed for incompressible and supersonic flows. Test results are reported for the airfoil of circular plan form, and the results are shown to agree well with the theory. As a supplement, a theory based on the older methods is presented for the rectangular airfoil of small aspect ratio.

  16. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is a method for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, its textbook implementation involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce this cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage costs of the inverse procedure scale roughly linearly with m instead of m² as in the textbook approach. For problems with very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
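
    A hedged sketch of the matrix-free ingredient in generic form (not the authors' geostatistical implementation): Jacobian-vector products J v are formed by a directional finite difference of the forward model, so a Gauss-Newton update can be obtained from an iterative least-squares solve without ever storing J. The toy forward model and observations below are hypothetical.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      def forward(s):
          # Hypothetical nonlinear forward model: m unknowns -> n observations.
          return np.array([s[0] * s[1], np.sin(s[0]), s[1] ** 2 + s[0]])

      def jac_vec(s, v, eps=1e-7):
          """Matrix-free J @ v via a directional finite difference of the model."""
          return (forward(s + eps * v) - forward(s)) / eps

      s = np.array([1.0, 2.0])                  # current iterate
      d_obs = np.array([2.2, 0.9, 5.1])         # hypothetical observations
      r = d_obs - forward(s)

      m, n = len(s), len(d_obs)
      J = LinearOperator(
          (n, m),
          matvec=lambda v: jac_vec(s, v),
          # For illustration, J^T w is built from m directional products.
          rmatvec=lambda w: np.array([jac_vec(s, e) @ w for e in np.eye(m)]),
      )
      step = lsqr(J, r)[0]                      # Gauss-Newton update direction
      print("GN step:", step, "-> next iterate:", s + step)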

  17. Principal Component Geostatistical Approach for large-dimensional inverse problems.

    PubMed

    Kitanidis, P K; Lee, J

    2014-07-01

    The quasi-linear geostatistical approach is a method for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, its textbook implementation involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce this cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage costs of the inverse procedure scale roughly linearly with m instead of m² as in the textbook approach. For problems with very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best.

  18. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require achieving a wow-effect by making previously learned skills more efficient, such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem-solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel's sequence of increasingly powerful formal theories, based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional, externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  19. BEYSIK: Language description and handbook for programmers (system for the collective use of the Institute of Space Research, Academy of Sciences USSR)

    NASA Technical Reports Server (NTRS)

    Orlov, I. G.

    1979-01-01

    The BASIC algorithmic language is described, and a guide is presented for the programmer using the language interpreter. BASIC is a high-level, problem-oriented programming language intended for the solution of computational and engineering problems.

  20. BASIC Simulation Programs; Volumes I and II. Biology, Earth Science, Chemistry.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    Computer programs which teach concepts and processes related to biology, earth science, and chemistry are presented. The seven biology problems deal with aspects of genetics, evolution and natural selection, gametogenesis, enzymes, photosynthesis, and the transport of material across a membrane. Four earth science problems concern climates, the…

  1. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  2. Can Dyscalculics Estimate the Results of Arithmetic Problems?

    ERIC Educational Resources Information Center

    Ganor-Stern, Dana

    2017-01-01

    The present study is the first to examine the computation estimation skills of dyscalculics versus controls using the estimation comparison task. In this task, participants judged whether an estimated answer to a multidigit multiplication problem was larger or smaller than a given reference number. While dyscalculics were less accurate than…

  3. Adults, Computers and Problem Solving: "What's the Problem?" OECD Skills Studies

    ERIC Educational Resources Information Center

    Chung, Ji Eun; Elliott, Stuart

    2015-01-01

    The "OECD Skills Studies" series aims to provide a strategic approach to skills policies. It presents OECD internationally comparable indicators and policy analysis covering issues such as: quality of education and curricula; transitions from school to work; vocational education and training (VET); employment and unemployment; innovative…

  4. Rotordynamic Instability Problems in High-Performance Turbomachinery, 1990

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The present workshop continues to report field experience and experimental results, and it expands the use of computational and control techniques with the integration of damper, bearing, and eccentric seal operation results. The intent of the workshop was to provide a continuing impetus for an understanding and resolution of these problems.

  5. Ethical considerations in revision rhinoplasty.

    PubMed

    Wayne, Ivan

    2012-08-01

    The problems that arise when reviewing another surgeon's work, the financial aspects of revision surgery, and the controversies that arise in marketing and advertising are explored. The technological advances of computer imaging and the Internet have introduced new problems that require additional consideration. Copyright © 2012 by Thieme Medical Publishers, Inc.

  6. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework that uses high performance computing to determine accurate solutions to the inverse optimization problem of fitting big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of the optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first-principles calculations to better describe the experimental data.

  7. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework that uses high performance computing to determine accurate solutions to the inverse optimization problem of fitting big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of the optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first-principles calculations to better describe the experimental data.

  8. Government regulations and other influences on the medical use of computers.

    PubMed

    Mishelevich, D J; Grams, R R; Mize, S G; Smith, J P

    1979-01-01

    This paper presents points brought out in a panel discussion held at the 12th Hawaii International Conference on System Sciences, January 1979. The session was attended by approximately two dozen interested parties from various segments of the academic, government, and health care communities. The broad categories covered include the specific problems of government regulations and their impact on clinical information systems installed at The University of Texas Health Science Center at Dallas, opportunities in a regulated environment, problems in a regulated environment, vendor-related issues in the marketing and manufacture of computer-based information systems, rational approaches to government control, and specific issues related to medical computer science.

  9. Grid adaption based on modified anisotropic diffusion equations formulated in the parametric domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagmeijer, R.

    1994-11-01

    A new grid-adaption algorithm for problems in computational fluid dynamics is presented. The basic equations are derived from a variational problem formulated in the parametric domain of the mapping that defines the existing grid. Modification of the basic equations provides desirable properties in boundary layers. The resulting modified anisotropic diffusion equations are solved for the computational coordinates as functions of the parametric coordinates, and these functions are numerically inverted. Numerical examples show that the algorithm is robust, that shocks and boundary layers are well resolved on the adapted grid, and that the flow solution becomes a globally smooth function of the computational coordinates.

  10. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach. Thereafter, we focus on a shape optimization problem for an Apollo-like reentry capsule. The optimization seeks to enhance the lift-to-drag ratio of the capsule by modifying the shape of its heat shield in conjunction with a center-of-gravity (c.g.) offset. This multipoint and multi-objective optimization problem is used to demonstrate the overall effectiveness of the Cartesian adjoint method for addressing the issues of complex aerodynamic design. This abstract presents only a brief outline of the numerical method and results; full details will be given in the final paper.
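
    For context, the adjoint economy the abstract relies on can be summarized in generic discrete form (standard notation, not the paper's): with flow residual R(q, x) = 0, state q, and design variables x, the objective gradient is

      \frac{dJ}{dx} = \frac{\partial J}{\partial x} + \psi^{T} \frac{\partial R}{\partial x},
      \qquad
      \left( \frac{\partial R}{\partial q} \right)^{T} \psi = - \left( \frac{\partial J}{\partial q} \right)^{T},

    so a single adjoint solve for ψ yields the sensitivities with respect to every design variable, which is why the cost is essentially independent of the number of design variables.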

  11. Solving the transport equation with quadratic finite elements: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, J.M.

    1997-12-31

    At the 4th Joint Conference on Computational Mathematics, the author presented a paper introducing a new quadratic finite element scheme (QFEM) for solving the transport equation. In the ensuing year the author has obtained considerable experience in the application of this method, including solution of eigenvalue problems, transmission problems, and solution of the adjoint form of the equation as well as the usual forward solution. He will present detailed results, and will also discuss other refinements of his transport codes, particularly for 3-dimensional problems on rectilinear and non-rectilinear grids.

  12. Clocks in Feynman's computer and Kitaev's local Hamiltonian: Bias, gaps, idling, and pulse tuning

    NASA Astrophysics Data System (ADS)

    Caha, Libor; Landau, Zeph; Nagaj, Daniel

    2018-06-01

    We present a collection of results about the clock in Feynman's computer construction and Kitaev's local Hamiltonian problem. First, by analyzing the spectra of quantum walks on a line with varying end-point terms, we find a better lower bound on the gap of the Feynman Hamiltonian, which translates into a less strict promise gap requirement for the quantum-Merlin-Arthur-complete local Hamiltonian problem. We also translate this result into the language of adiabatic quantum computation. Second, introducing an idling clock construction with a large state space but fast Cesaro mixing, we provide a way for achieving an arbitrarily high success probability of computation with Feynman's computer with only a logarithmic increase in the number of clock qubits. Finally, we tune and thus improve the costs (locality and gap scaling) of implementing a (pulse) clock with a single excitation.
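
    For orientation, the clock construction being analyzed is built on Feynman's propagation Hamiltonian, which in its standard textbook form (not the paper's tuned variants) reads

      H_{\mathrm{prop}} = \frac{1}{2} \sum_{t=1}^{T} \Big( |t\rangle\langle t| \otimes I + |t-1\rangle\langle t-1| \otimes I - |t\rangle\langle t-1| \otimes U_t - |t-1\rangle\langle t| \otimes U_t^{\dagger} \Big),

    whose ground space is spanned by history states \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} |t\rangle \otimes U_t \cdots U_1 |\psi\rangle; the spectral gap above this space is the quantity the improved lower bound concerns.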

  13. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development, and a tremendous number of studies have been conducted in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPUs in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms is also presented. PMID:24486639

  14. Employing subgoals in computer programming education

    NASA Astrophysics Data System (ADS)

    Margulieux, Lauren E.; Catrambone, Richard; Guzdial, Mark

    2016-01-01

    The rapid integration of technology into our professional and personal lives has left many education systems ill-equipped to deal with the influx of people seeking computing education. To improve computing education, we are applying techniques that have been developed for other procedural fields. The present study applied such a technique, subgoal labeled worked examples, to explore whether it would improve programming instruction. The first two experiments, conducted in a laboratory, suggest that the intervention improves undergraduate learners' problem-solving performance and affects how learners approach problem-solving. The third experiment demonstrates that the intervention has similar, and perhaps stronger, effects in an online learning environment with in-service K-12 teachers who want to become qualified to teach computing courses. By implementing this subgoal intervention as a tool for educators to teach themselves and their students, education systems could improve computing education and better prepare learners for an increasingly technical world.

  15. Computational modelling of cellular level metabolism

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Heino, J.; Somersalo, E.

    2008-07-01

    The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations and possibly the rates of change of some of the concentrations, based on data which are often scarce, noisy and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model for cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.

  16. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing a very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  17. Near Field Trailing Edge Tone Noise Computation

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.

    2002-01-01

    Blunt trailing edges in a flow often generate tone noise due to wall-jet shear layer and vortex shedding. In this paper, the space-time conservation element (CE/SE) method is employed to numerically study the near-field noise of blunt trailing edges. Two typical cases, namely, flow past a circular cylinder (aeolian noise problem) and flow past a flat plate of finite thickness are considered. The computed frequencies compare well with experimental data. For the aeolian noise problem, comparisons with the results of other numerical approaches are also presented.

  18. A New Domain Decomposition Approach for the Gust Response Problem

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.

    2002-01-01

    A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach succeeds in providing robust and grid-independent solutions.

  19. Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Lin, Wen H.; Chan, Daniel C.

    1997-01-01

    This paper presents computed results for some of the CAA benchmark problems via the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations are considered as benchmark testing of the functionality, accuracy, and performance of the solver. Results of these computations demonstrate that the solver is capable of simulating the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of the solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1 for the propagation of a linear wave and the first problem of Category 4 for an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim in including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions, and the agreement is excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems. This feature makes the method quite attractive for developing a computational acoustic solver for calculating the aero/hydrodynamic noise in a violent flow environment.

  20. An advanced teaching scheme for integrating problem-based learning in control education

    NASA Astrophysics Data System (ADS)

    Juuso, Esko K.

    2018-03-01

    Engineering education needs to provide both theoretical knowledge and problem-solving skills. Many topics can be presented in lectures, and computer exercises are good tools for teaching the skills. Learning by doing is combined with lectures to provide additional material and perspectives. The teaching scheme includes lectures, computer exercises, case studies, seminars and reports organized as a problem-based learning process. In the gradually refining learning material, each teaching method has its own role. The scheme, which has been used in teaching two 4th-year courses, is beneficial for overall learning progress, especially in bilingual courses. The students become familiar with new perspectives and are ready to use the course material in application projects.

  1. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
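
    As a concrete (and deliberately tiny) illustration of the kind of formulation the review surveys, the following sketch solves a toy 0/1 ILP, a weighted vertex cover, with SciPy's MILP interface. The graph, the weights, and the choice of SciPy are illustrative assumptions, not material from the review.

        # A minimal sketch of a toy 0/1 ILP (weighted vertex cover) solved with
        # SciPy's MILP interface; the instance is made up for illustration.
        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # toy interaction graph
        weights = np.array([2.0, 1.0, 3.0, 1.0, 2.0])      # cost of each vertex

        # One covering constraint per edge: x_u + x_v >= 1.
        A = np.zeros((len(edges), len(weights)))
        for row, (u, v) in enumerate(edges):
            A[row, u] = A[row, v] = 1.0

        res = milp(
            c=weights,                                     # minimize total weight
            constraints=LinearConstraint(A, lb=1.0, ub=np.inf),
            integrality=np.ones_like(weights),             # all variables integer
            bounds=Bounds(0, 1),                           # 0/1 variables
        )
        print("selected vertices:", np.flatnonzero(res.x > 0.5), "cost:", res.fun)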

  2. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.

  3. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
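
    The applications themselves are GL-based workstation programs, but the quantities the first calculator evaluates are the standard compressible-flow relations. A minimal sketch, assuming air with gamma = 1.4; the function names and the test Mach number are illustrative, not taken from the NASA codes:

        # Standard isentropic and normal-shock relations such a calculator evaluates.
        GAMMA = 1.4

        def isentropic_p0_over_p(M, g=GAMMA):
            """Stagnation-to-static pressure ratio for isentropic flow at Mach M."""
            return (1.0 + 0.5 * (g - 1.0) * M * M) ** (g / (g - 1.0))

        def normal_shock_M2(M1, g=GAMMA):
            """Downstream Mach number across a normal shock (M1 > 1)."""
            return ((1.0 + 0.5 * (g - 1.0) * M1 * M1)
                    / (g * M1 * M1 - 0.5 * (g - 1.0))) ** 0.5

        def normal_shock_p2_over_p1(M1, g=GAMMA):
            """Static pressure ratio across a normal shock."""
            return 1.0 + 2.0 * g / (g + 1.0) * (M1 * M1 - 1.0)

        M1 = 2.0
        print(isentropic_p0_over_p(M1))    # ~7.824
        print(normal_shock_M2(M1))         # ~0.577
        print(normal_shock_p2_over_p1(M1)) # 4.5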

  4. Homogenization of locally resonant acoustic metamaterials towards an emergent enriched continuum.

    PubMed

    Sridhar, A; Kouznetsova, V G; Geers, M G D

    This contribution presents a novel homogenization technique for modeling heterogeneous materials with micro-inertia effects such as locally resonant acoustic metamaterials. Linear elastodynamics is used to model the micro and macro scale problems and an extended first order Computational Homogenization framework is used to establish the coupling. Craig Bampton Mode Synthesis is then applied to solve and eliminate the microscale problem, resulting in a compact closed form description of the microdynamics that accurately captures the Local Resonance phenomena. The resulting equations represent an enriched continuum in which additional kinematic degrees of freedom emerge to account for Local Resonance effects which would otherwise be absent in a classical continuum. Such an approach retains the accuracy and robustness offered by a standard Computational Homogenization implementation, whereby the problem and the computational time are reduced to the on-line solution of one scale only.

  5. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).

  6. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  7. CoDA 2014 special issue: Exploring data-focused research across the department of energy: Editorial

    DOE PAGES

    Myers, Kary Lynn

    2015-10-05

    Here, this collection of papers, written by researchers at the national labs, in academia, and in industry present real problems, massive and complex datasets, and novel statistical approaches motivated by the challenges presented by experimental and computational science. You'll find explorations of the trajectories of aircraft and of the light curves of supernovae, of computer network intrusions and of nuclear forensics, of photovoltaics and overhead imagery.

  8. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the …
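
    The record above is heavily truncated by extraction, but the technique it names, sampling-based stochastic sensitivities via score functions, has a compact generic form: d/d(mu) E[f(X)] = E[f(X) d log p(X; mu)/d(mu)]. A minimal Monte Carlo sketch under illustrative assumptions (a Gaussian design variable and the toy performance measure f(x) = x^2):

        # Score-function (likelihood-ratio) sensitivity estimate for X ~ N(mu, sigma^2).
        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma, n = 1.5, 0.8, 200_000
        x = rng.normal(mu, sigma, n)

        f = x ** 2                      # toy performance measure f(X) = X^2
        score = (x - mu) / sigma ** 2   # d log p / d mu for a Gaussian

        print("score-function estimate:", np.mean(f * score))  # ~ 2*mu = 3.0
        print("analytic sensitivity   :", 2 * mu)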

  9. Fault Tolerant Parallel Implementations of Iterative Algorithms for Optimal Control Problems

    DTIC Science & Technology

    1988-01-21

    … steps, but did not discuss any specific parallel implementation. Gajski [5] improved upon this result by performing the SIMD computation in … When N = p², our approach reduces to that of [5], except that Gajski presents the coefficient computation and partial solution phases as a single … The SIMD algorithm presented by Gajski [5] can be most efficiently mapped to a unidirectional ring network with broadcasting capability. Based …

  10. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  11. Exact solution of large asymmetric traveling salesman problems.

    PubMed

    Miller, D L; Pekny, J F

    1991-02-15

    The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
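
    Exact ATSP algorithms of this kind commonly bound the optimum with the assignment-problem (AP) relaxation and branch on subtours. The following sketch computes that lower bound on a random instance; the instance, the use of SciPy's Hungarian-method solver, and the branch-and-bound framing are illustrative assumptions rather than the paper's actual algorithm:

        # AP relaxation of the ATSP: solve the assignment problem, then inspect
        # the resulting cycle cover for subtours (one cycle == an optimal tour).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(1)
        n = 8
        cost = rng.integers(1, 100, size=(n, n)).astype(float)
        np.fill_diagonal(cost, 1e9)            # forbid self-loops

        rows, succ = linear_sum_assignment(cost)
        print("AP lower bound:", cost[rows, succ].sum())

        # Decompose the assignment into cycles.
        unvisited, cycles = set(range(n)), []
        while unvisited:
            start = c = unvisited.pop()
            cycle = [c]
            while succ[c] != start:
                c = succ[c]
                unvisited.discard(c)
                cycle.append(c)
            cycles.append(cycle)
        print("subtours:", cycles)             # >1 cycle -> branch in a B&B scheme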

  12. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of gravity harmonics desired, this method involves integration of N adjoint solutions, as compared to integration of N² partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher-resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.
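
    The economy described here can be seen in a generic linear-algebra miniature: for an observable g = cᵀx with forward model Ax = b(θ), a single adjoint solve Aᵀλ = c yields the partials with respect to every parameter at once. A sketch with random stand-in matrices (the actual operators would come from linearized orbital dynamics):

        # One adjoint solve gives dg/dtheta_k = lam^T db/dtheta_k for ALL k,
        # instead of one forward solve per parameter.
        import numpy as np

        rng = np.random.default_rng(2)
        n, n_params = 6, 4
        A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned forward operator
        c = rng.normal(size=n)                        # observable functional
        dbdtheta = rng.normal(size=(n, n_params))     # db/dtheta, one column per parameter

        lam = np.linalg.solve(A.T, c)                 # single adjoint solve
        grad_adjoint = dbdtheta.T @ lam               # all partials at once

        # Cross-check against one forward solve per parameter.
        grad_forward = np.array([c @ np.linalg.solve(A, dbdtheta[:, k])
                                 for k in range(n_params)])
        print(np.allclose(grad_adjoint, grad_forward))  # True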

  13. Attentional Selection in Object Recognition

    DTIC Science & Technology

    1993-02-01

    … order. It also affects the choice of strategies in both the filtering and arbiter stages. The set … such processing. In Treisman's model this was hidden in the concept of the selection filter. Later computational models of attention tried to … This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for …

  14. Parallelism in Manipulator Dynamics. Revision.

    DTIC Science & Technology

    1983-12-01

    This paper addresses the problem of efficiently computing the motor torques required to drive a lower-pair kinematic chain (e.g., a typical manipulator arm in free motion, or a mechanical leg in the … computations), and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using special-purpose …

  15. A direct numerical method for predicting concentration profiles in a turbulent boundary layer over a flat plate. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dow, J. W.

    1972-01-01

    A numerical solution of the turbulent mass transport equation utilizing the concept of eddy diffusivity is presented as an efficient method of investigating turbulent mass transport in boundary-layer-type flows. A FORTRAN computer program is used to study the two-dimensional diffusion of ammonia, from a line source on the surface, into a turbulent boundary layer over a flat plate. The results of the numerical solution are compared with experimental data for verification. Several other solutions to diffusion problems are presented to illustrate the versatility of the computer program and to provide some insight into the problem of mass diffusion as a whole.

  16. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
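
    The core computation in such a variance-covariance analysis is standard: for a linearized observation model y = Ax + e with weight matrix W, the covariance of the weighted-least-squares estimate is (AᵀWA)⁻¹. A minimal sketch with a random stand-in design matrix (real partials would come from the VLBI delay model):

        # Parameter covariance of a weighted least-squares estimate.
        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 40, 5                          # observations, geodetic parameters
        A = rng.normal(size=(m, n))           # partials of delays w.r.t. parameters
        W = np.diag(1.0 / rng.uniform(0.5, 2.0, m) ** 2)   # inverse-variance weights

        cov = np.linalg.inv(A.T @ W @ A)      # parameter covariance matrix
        print("parameter std devs:", np.sqrt(np.diag(cov)))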

  17. Subspace algorithms for identifying separable-in-denominator 2D systems with deterministic-stochastic inputs

    NASA Astrophysics Data System (ADS)

    Ramos, José A.; Mercère, Guillaume

    2016-12-01

    In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.

  18. Principles for the wise use of computers by children.

    PubMed

    Straker, L; Pollock, C; Maslen, B

    2009-11-01

    Computer use by children at home and school is now common in many countries. Child computer exposure varies with the type of computer technology available and the child's age, gender and social group. This paper reviews the current exposure data and the evidence for positive and negative effects of computer use by children. Potential positive effects of computer use by children include enhanced cognitive development and school achievement, reduced barriers to social interaction, enhanced fine motor skills and visual processing and effective rehabilitation. Potential negative effects include threats to child safety, inappropriate content, exposure to violence, bullying, Internet 'addiction', displacement of moderate/vigorous physical activity, exposure to junk food advertising, sleep displacement, vision problems and musculoskeletal problems. The case for child-specific, evidence-based guidelines for wise use of computers is presented, based on children using computers differently from adults, being physically, cognitively and socially different from adults, being in a state of change and development, and the potential impact on later adult risk. Progress towards child-specific guidelines is reported. Finally, a set of guideline principles is presented as the basis for more detailed guidelines on the physical, cognitive and social impact of computer use by children. The principles cover computer literacy, technology safety, child safety and privacy, and appropriate social, cognitive and physical development. The majority of children in affluent communities now have substantial exposure to computers. This is likely to have significant effects on child physical, cognitive and social development. Ergonomics can provide and promote guidelines for wise use of computers by children and, by doing so, promote the positive effects and reduce the negative effects of computer-child, and subsequent computer-adult, interaction.

  19. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
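
    The tempering idea is easy to sketch: estimate the log-likelihood with a bootstrap particle filter while stepping an artificial measurement-noise variance down toward the true, small one. The model, the fixed noise schedule (the paper's is adaptive), and the particle count below are illustrative choices, not the paper's:

        # Bootstrap particle filter log-likelihood under an inflated, then
        # shrinking, artificial measurement-noise variance.
        import numpy as np

        rng = np.random.default_rng(4)

        def loglik(y, meas_var, n_part=500):
            """PF log-likelihood for x' = 0.9 x + process noise, y = x + e."""
            x = rng.normal(0.0, 1.0, n_part)
            ll = 0.0
            for yt in y:
                x = 0.9 * x + rng.normal(0.0, 0.1, n_part)      # propagate
                logw = (-0.5 * (yt - x) ** 2 / meas_var
                        - 0.5 * np.log(2 * np.pi * meas_var))
                ll += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
                w = np.exp(logw - logw.max())
                x = rng.choice(x, n_part, p=w / w.sum())        # resample
            return ll

        # Simulate data with very little measurement noise (the hard case).
        x, y = 0.0, []
        for _ in range(50):
            x = 0.9 * x + rng.normal(0.0, 0.1)
            y.append(x + rng.normal(0.0, 0.01))

        # Temper: start with a large artificial variance and shrink it.
        for v in [1.0, 0.1, 0.01, 0.001]:
            print(f"artificial meas. var {v:7.3f} -> log-lik {loglik(y, v):.1f}")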

  20. Structured Problem Solving and the Basic Graphic Methods within a Total Quality Leadership Setting: Case Study

    DTIC Science & Technology

    1992-02-01

    … develops, and maintains computer programs for the Department of the Navy. It provides life cycle support for over 50 computer programs installed at over … the computer programs. Table 4 presents a list of possible product or output measures of functionality for ACDS Block 0 programs. Examples of output … were identified as important "causes" of process performance. Functionality of the computer programs was the result or "effect" of the combination of …

  1. On the role of minicomputers in structural design

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1977-01-01

    Results are presented of exploratory studies on the use of a minicomputer in conjunction with large-scale computers to perform structural design tasks, including data and program management, use of interactive graphics, and computations for structural analysis and design. An assessment is made of minicomputer use for the structural model definition and checking and for interpreting results. Included are results of computational experiments demonstrating the advantages of using both a minicomputer and a large computer to solve a large aircraft structural design problem.

  2. Interfacing laboratory instruments to multiuser, virtual memory computers

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Stang, David B.; Roth, Don J.

    1989-01-01

    Incentives, problems and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition and processing. The architecture of the system that was devised, and associated programming and subroutines are described. An example program involving computer controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.

  3. Improved computer programs for calculating potential flow in propulsion system inlets

    NASA Technical Reports Server (NTRS)

    Stockman, N. O.; Farrell, C. A., Jr.

    1977-01-01

    Computer programs to calculate the incompressible potential flow corrected for compressibility in axisymmetric inlets at arbitrary operating conditions are presented. Included are a statement of the problem to be solved, a description of each of the programs and sufficient documentation, including a test case, to enable a user to run the programs.

  4. Students' Use of Computational Thinking in Linear Algebra

    ERIC Educational Resources Information Center

    Bagley, Spencer; Rabin, Jeffrey M.

    2016-01-01

    In this work, we examine students' ways of thinking when presented with a novel linear algebra problem. Our intent was to explore how students employ and coordinate three modes of thinking, which we call computational, abstract, and geometric, following similar frameworks proposed by Hillel (2000) and Sierpinska (2000). However, the undergraduate…

  5. Chemical Engineering and Instructional Computing: Are They in Step? (Part 2).

    ERIC Educational Resources Information Center

    Seider, Warren D.

    1988-01-01

    Describes the use of "CACHE IBM PC Lessons for Courses Other than Design and Control" as open-ended design oriented problems. Presents graphics from some of the software and discusses high-resolution graphics workstations. Concludes that computing tools are in line with design and control practice in chemical engineering. (MVL)

  6. Automated Management Of Documents

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1995-01-01

    Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.

  7. User's Guide for SKETCH

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    2000-01-01

    A user's guide for the computer program SKETCH is presented on this disk. SKETCH solves a popular problem in computer graphics: the removal of hidden lines from images of solid objects. Examples and illustrations are included in the guide. Also included is the SKETCH program itself, so a user can incorporate it into a particular software system.

  8. Strategic Flexibility in Computational Estimation for Chinese- and Canadian-Educated Adults

    ERIC Educational Resources Information Center

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-01-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with…

  9. IP Addressing: Problem-Based Learning Approach on Computer Networks

    ERIC Educational Resources Information Center

    Jevremovic, Aleksandar; Shimic, Goran; Veinovic, Mladen; Ristic, Nenad

    2017-01-01

    The case study presented in this paper describes the pedagogical aspects and experience gathered while using an e-learning tool named IPA-PBL. Its main purpose is to provide additional motivation for adopting theoretical principles and procedures in a computer networks course. In the proposed model, the sequencing of activities of the learning…

  10. A Computer Algebra Approach to Solving Chemical Equilibria in General Chemistry

    ERIC Educational Resources Information Center

    Kalainoff, Melinda; Lachance, Russ; Riegner, Dawn; Biaglow, Andrew

    2012-01-01

    In this article, we report on a semester-long study of the incorporation into our general chemistry course, of advanced algebraic and computer algebra techniques for solving chemical equilibrium problems. The method presented here is an alternative to the commonly used concentration table method for describing chemical equilibria in general…

  11. A Placement Test for Computer Science: Design, Implementation, and Analysis

    ERIC Educational Resources Information Center

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok; Lang, Jeff

    2006-01-01

    An introductory CS1 course presents problems for educators and students due to students' diverse background in programming knowledge and exposure. Students who enroll in CS1 also have different expectations and motivations. Prompted by the curricular guidelines for undergraduate programmes in computer science released in 2001 by the ACM/IEEE, and…

  12. Computational Hemodynamic Simulation of Human Circulatory System under Altered Gravity

    NASA Technical Reports Server (NTRS)

    Kim. Chang Sung; Kiris, Cetin; Kwak, Dochan

    2003-01-01

    A computational hemodynamics approach is presented to simulate the blood flow through the human circulatory system under altered gravity conditions. Numerical techniques relevant to hemodynamics issues are introduced, including non-Newtonian modeling for flow characteristics governed by red blood cells, distensible wall motion due to the heart pulse, and capillary bed modeling for outflow boundary conditions. Gravitational body force terms are added to the Navier-Stokes equations to study the effects of gravity on internal flows. Six types of gravity benchmark problems are presented to provide a fundamental understanding of gravitational effects on the human circulatory system. For code validation, computed results are compared with steady and unsteady experimental data for non-Newtonian flows in a carotid bifurcation model and a curved circular tube, respectively. This computational approach is then applied to the blood circulation in the human brain as a target problem. A three-dimensional, idealized Circle of Willis configuration is developed with minor arteries truncated based on anatomical data. Demonstrated is not only the mechanism of the collateral circulation but also the effects of gravity on the distensible wall motion and resultant flow patterns.

  13. Streaming Support for Data Intensive Cloud-Based Sequence Analysis

    PubMed Central

    Issa, Shadi A.; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J.; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation. PMID:23710461

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D.; McInnes, L. C.; Woodward, C.

    This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.

  15. Modelling human problem solving with data from an online game.

    PubMed

    Rach, Tim; Kirsch, Alexandra

    2016-11-01

    Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.

  16. Making objective decisions in mechanical engineering problems

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Oanta, E.; Sabau, A.

    2017-08-01

    The decision-making process has a great influence on the development of a given project, the goal being to select an optimal choice in a given context. Because of its great importance, decision making has been studied using the methods of various sciences, eventually giving rise to game theory, which is considered the foundation of the science of logical decision making in various fields. The paper presents some basic ideas from game theory in order to offer the information necessary to understand multiple-criteria decision making (MCDM) problems in engineering. The solution is to transform the multiple-criteria problem into a one-criterion decision problem, using the notion of utility together with the weighted sum model or the weighted product model. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. Future directions of research include the use of other types of criteria, the development of computer-based instruments for general decision-making problems, and a software module based on expert-system principles to be included in the Wiki software applications for polymeric materials that are already operational.
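
    The weighted sum model mentioned above reduces to a few lines: normalize each criterion to a utility in [0, 1], flip cost-type criteria, and rank alternatives by the weighted sum. The alternatives, criteria, and weights below are invented for illustration (the paper derives its weights with the Step method):

        # Weighted sum model (WSM) for a toy multiple-criteria decision.
        import numpy as np

        # Rows: design alternatives; columns: criteria (cost, mass, safety score).
        scores = np.array([[120.0, 45.0, 0.90],
                           [100.0, 55.0, 0.80],
                           [140.0, 40.0, 0.95]])
        benefit = np.array([False, False, True])   # cost and mass: lower is better
        weights = np.array([0.5, 0.3, 0.2])        # e.g., from the Step method

        lo, hi = scores.min(axis=0), scores.max(axis=0)
        util = (scores - lo) / (hi - lo)           # normalize each criterion to [0, 1]
        util[:, ~benefit] = 1.0 - util[:, ~benefit]  # flip cost-type criteria

        total = util @ weights
        print("utilities:", total, "-> best alternative:", total.argmax())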

  17. A computer tool to support in design of industrial Ethernet.

    PubMed

    Lugli, Alexandre Baratella; Santos, Max Mauro Dias; Franco, Lucia Regina Horta Rodrigues

    2009-04-01

    This paper presents a computer tool to support the design and development of an industrial Ethernet network, verifying the physical layer (cable resistance and capacitance, scan time, network power supply including the Power over Ethernet (PoE) concept, and wireless) and the occupation rate (the amount of information transmitted on the network versus the controller network scan time). These functions are accomplished without a single physical element installed in the network, using only simulation. The tool presents a detailed view of the network to the user, highlights possible problems in the network, and offers an extremely friendly environment.

  18. Computer-aided head film analysis: the University of California San Francisco method.

    PubMed

    Baumrind, S; Miller, D M

    1980-07-01

    Computer technology is already assuming an important role in the management of orthodontic practices. The next 10 years are likely to see expansion in computer usage into the areas of diagnosis, treatment planning, and treatment-record keeping. In the areas of diagnosis and treatment planning, one of the first problems to be attacked will be the automation of head film analysis. The problems of constructing computer-aided systems for this purpose are considered herein in the light of the authors' 10 years of experience in developing a similar system for research purposes. The need for building in methods for automatic detection and correction of gross errors is discussed and the authors' method for doing so is presented. The construction of a rudimentary machine-readable data base for research and clinical purposes is described.

  19. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    This paper briefly outlines how a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based multidisciplinary design (MdD) optimization procedures. Implications of these MdD requirements upon advanced CFD codes are somewhat different than those imposed by a single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  20. Quantum Iterative Deepening with an Application to the Halting Problem

    PubMed Central

    Tarrataca, Luís; Wichert, Andreas

    2013-01-01

    Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) a detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation and (3) an inherent speedup to occur during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465

  1. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.

    PubMed

    Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal

    2011-08-18

    RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
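
    The "similar dynamic programming approach" the abstract refers to is the classical O(n³) Nussinov-style recurrence, sketched below for base-pair maximization; this is the cubic baseline that Valiant-style techniques accelerate, not the sub-cubic algorithm itself:

        # Classical Nussinov dynamic program: maximize complementary base pairs.
        def nussinov(seq, pairs=frozenset({"AU", "UA", "CG", "GC", "GU", "UG"})):
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(1, n):                 # subsequence lengths
                for i in range(n - span):
                    j = i + span
                    best = N[i][j - 1]               # j unpaired
                    for k in range(i, j):            # j paired with some k
                        if seq[k] + seq[j] in pairs:
                            left = N[i][k - 1] if k > i else 0
                            best = max(best, left + N[k + 1][j - 1] + 1)
                    N[i][j] = best
            return N[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # maximum number of base pairs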

  2. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach

    PubMed Central

    2011-01-01

    Background RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. Results We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. Conclusions The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms. PMID:21851589

  3. The Solar Swan Dive.

    ERIC Educational Resources Information Center

    Dilsaver, John S.; Siler, Joseph R.

    1991-01-01

    Solutions are presented for a problem in which the time necessary for an object to fall into the sun from the average earth-sun distance is computed. Both calculus- and noncalculus-based solutions are presented. A sample computer solution is included. (CW)
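
    A computer solution in the spirit of the article can lean on Kepler's third law: falling from rest at 1 AU is the limiting case of an ellipse with semi-major axis a = (1 AU)/2, so the fall time is half that orbit's period, about 64.6 days. A sketch with rounded constants (the article's own solution may differ):

        # Fall time into the sun from 1 AU via Kepler's third law.
        import math

        AU = 1.496e11          # m, mean earth-sun distance
        GM_SUN = 1.327e20      # m^3/s^2, solar gravitational parameter

        a = AU / 2.0                                   # degenerate-ellipse semi-major axis
        t_fall = math.pi * math.sqrt(a ** 3 / GM_SUN)  # half of 2*pi*sqrt(a^3/GM)
        print(f"fall time = {t_fall / 86400:.1f} days")  # ~64.6 days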

  4. Artificial Intelligence: Themes in the Second Decade. Memo Number 67.

    ERIC Educational Resources Information Center

    Feigenbaum, Edward A.

    The text of an invited address on artificial intelligence (AI) research over the 1963-1968 period is presented. A survey of recent studies on the computer simulation of intellective processes emphasizes developments in heuristic programing, problem-solving and closely related learning models. Progress and problems in these areas are indicated by…

  5. Solving the Water Jugs Problem by an Integer Sequence Approach

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this article, we present an integer sequence approach to solve the classic water jugs problem. The solution steps can be obtained easily by additions and subtractions only, which is suitable for manual calculation or programming by computer. This approach can be introduced to secondary and undergraduate students, and also to teachers and…
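
    The additions-and-subtractions idea can be sketched directly: with jug capacities a and b, repeatedly filling jug A and pouring into jug B walks the running amount through the residues k·a mod b, so any target divisible by gcd(a, b) and smaller than b eventually appears. A minimal sketch for the classic 3- and 5-litre instance (the bookkeeping below is a simplification of the article's step listing):

        def water_jugs(a, b, target):
            # x tracks the running amount: fill A (+a), and whenever jug B
            # would overflow its capacity, empty it (-b). Assumes target < b
            # and gcd(a, b) divides target, so the walk k*a mod b must hit it.
            x, steps = 0, []
            while x != target:
                x += a
                steps.append(f"fill A, pour into B: +{a} -> {x}")
                while x >= b:
                    x -= b
                    steps.append(f"empty B when full:   -{b} -> {x}")
            return steps

        for step in water_jugs(3, 5, 4):    # measure 4 litres with 3- and 5-litre jugs
            print(step)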

  6. University Internet Services: Problems and Opportunities.

    ERIC Educational Resources Information Center

    Phan, Dien D.; Chen, Jim Q.

    This paper presents the findings of a study on the use of the World Wide Web among students at St. Cloud State University, Minnesota, USA. The paper explores problems and challenges of campus Web computing and the relationships among the extent of Web usage, class level, and overall student academic performance. Specifically, the purposes of this…

  7. Ideas in Practice (3): A Simulated Laboratory Experience in Digital Design.

    ERIC Educational Resources Information Center

    Cleaver, Thomas G.

    1988-01-01

    Gives an example of the use of a simplified logic simulator in a logic design course. Discusses some problems in logic design classes, commercially available software, and software problems. Describes computer-aided engineering (CAE) software. Lists 14 experiments in the simulated laboratory and presents students' evaluation of the course. (YP)

  8. Green's function solution to radiative heat transfer between longitudinal gray fins

    NASA Technical Reports Server (NTRS)

    Frankel, J. I.; Silvestri, J. J.

    1991-01-01

    A demonstration is presented of the applicability and versatility of a pure integral formulation for radiative-conductive heat-transfer problems. Preliminary results have been obtained which indicate that this formulation allows an accurate, fast, and stable computation procedure to be implemented. Attention is given to the accessory problem defining Green's function.

  9. Flux-Based Finite Volume representations for general thermal problems

    NASA Technical Reports Server (NTRS)

    Mohan, Ram V.; Tamma, Kumar K.

    1993-01-01

    Flux-Based Finite Volume (FV) element representations for general thermal problems are given in conjunction with a generalized trapezoidal gamma-T family of algorithms, formulated following the spirit of what we term the Lax-Wendroff-based FV formulations. The new flux-based representations introduced offer an improved physical interpretation of the problem along with computationally convenient and attractive features. The space and time discretization emanate from a conservation form of the governing equation for thermal problems and, in conjunction with the flux-based element representations, give rise to physically improved and locally conservative numerical formulations. The present representations seek to combine improved locally conservative properties, improved physical representations, and desirable computational features; they are based on a 2D, bilinear FV element and can be extended to other cases. Time discretization based on a gamma-T family of algorithms in the spirit of the Lax-Wendroff-based FV formulations is employed. Numerical examples involving linear/nonlinear steady and transient situations are shown to demonstrate the applicability of the present representations for thermal analysis situations.
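
    A one-dimensional analogue makes the "locally conservative" property concrete: every cell is updated only by the difference of the fluxes through its two faces, so interior fluxes cancel in pairs and total heat is exactly conserved. A minimal sketch (explicit Euler in time, insulated boundaries; the grid and time step are illustrative, not the paper's 2D bilinear element):

        # Locally conservative flux-based FV update for 1D transient conduction.
        import numpy as np

        nx, dx, dt, alpha = 50, 1.0 / 50, 1e-4, 1.0
        T = np.zeros(nx)
        T[nx // 2] = 1.0                              # initial hot spot

        for _ in range(1000):
            q = -alpha * np.diff(T) / dx              # interior face fluxes
            qf = np.concatenate(([0.0], q, [0.0]))    # zero-flux (insulated) boundaries
            T -= dt / dx * (qf[1:] - qf[:-1])         # conservative cell update
        print("total heat:", T.sum() * dx)            # constant: fluxes cancel in pairs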

  10. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory, in which the cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
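
    Of the two evolutionary ingredients named above, differential evolution is the easier to miniaturize: the toy below sizes two truss members to minimize mass under a stress constraint handled by a penalty. The geometry, loads, limits, and the use of SciPy's optimizer are invented for the sketch; the TP's combined topology/geometry problem is far richer:

        # Differential evolution sizing two member areas under a stress penalty.
        import numpy as np
        from scipy.optimize import differential_evolution

        L, rho, P, sigma_max = 1.0, 2700.0, 1e4, 1e8   # m, kg/m^3, N, Pa

        def mass_with_penalty(areas):
            a1, a2 = areas
            mass = rho * L * (a1 + a2)
            stress = np.array([P / a1, P / a2])        # crude member stresses
            violation = np.maximum(stress / sigma_max - 1.0, 0.0).sum()
            return mass + 1e6 * violation              # penalize infeasibility

        res = differential_evolution(mass_with_penalty,
                                     bounds=[(1e-6, 1e-2)] * 2, seed=0)
        print("areas:", res.x, "mass:", res.fun)       # areas -> P/sigma_max = 1e-4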

  11. Searching for memories, Sudoku, implicit check bits, and the iterative use of not-always-correct rapid neural computation.

    PubMed

    Hopfield, J J

    2008-05-01

    The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
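
    The associative-memory dynamics under discussion are the classical Hopfield update, which is compact enough to sketch: Hebbian weights store a few ±1 patterns, and recall iterates a sign rule from a clue. With only a small fraction of a pattern retained, recall can fail, mirroring the small-clue failure mode described above. Sizes and seeds are arbitrary:

        # Hopfield associative memory: Hebbian storage and sign-rule recall.
        import numpy as np

        rng = np.random.default_rng(5)
        n, n_patterns = 64, 3
        patterns = rng.choice([-1, 1], size=(n_patterns, n))

        W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian storage
        np.fill_diagonal(W, 0.0)

        clue = patterns[0].copy()
        clue[8:] = rng.choice([-1, 1], size=n - 8)      # keep only a small clue

        x = clue
        for _ in range(20):                             # synchronous updates
            x = np.sign(W @ x)
        overlap = (x @ patterns[0]) / n                 # +1 means perfect recall
        print("overlap with stored pattern:", overlap)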

  12. Solving the water jugs problem by an integer sequence approach

    NASA Astrophysics Data System (ADS)

    Man, Yiu-Kwong

    2012-01-01

    In this article, we present an integer sequence approach to solve the classic water jugs problem. The solution steps can be obtained easily by additions and subtractions only, which is suitable for manual calculation or programming by computer. This approach can be introduced to secondary and undergraduate students, and also to teachers and lecturers involved in teaching mathematical problem solving, recreational mathematics, or elementary number theory.
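
    A minimal sketch of the additions-and-subtractions idea follows, assuming the standard fill/pour/empty moves; the article's exact integer-sequence formulation may differ.

```python
# Repeatedly fill jug A, pour into jug B, and empty B when full; the running
# amounts form an integer sequence reached by additions and subtractions
# alone. Assumes the target is measurable (it divides by gcd(a, b) and is
# at most max(a, b)); otherwise this loop would not terminate.

def water_jugs(a, b, target):
    """Return the sequence of (x, y) states measuring `target`."""
    x, y = 0, 0
    steps = [(x, y)]
    while x != target and y != target:
        if x == 0:
            x = a                      # fill jug A
        elif y == b:
            y = 0                      # empty jug B
        else:
            move = min(x, b - y)       # pour A into B
            x, y = x - move, y + move
        steps.append((x, y))
    return steps

for state in water_jugs(3, 5, 4):      # classic 3- and 5-unit jugs, target 4
    print(state)
```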

  13. Linear Programming and Its Application to Pattern Recognition Problems

    NASA Technical Reports Server (NTRS)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear-programming-like techniques, as applied to pattern recognition problems, are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of each author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
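
    As a generic illustration of how linear programming can be applied to pattern recognition (a textbook linear-separation formulation, not a reconstruction of the surveyed papers), the sketch below minimizes total classification slack with SciPy's linprog; the sample points are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Find weights w, bias c, and minimal total slack so that class +1 points
# satisfy w.x + c >= 1 - s_i and class -1 points satisfy w.x + c <= -1 + s_i.
# Decision vector: [w1, w2, c, s_1..s_n]; slacks are nonnegative.

X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 1.0],
              [6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])   # illustrative points
y = np.array([1, 1, 1, -1, -1, -1])
n, d = X.shape

cvec = np.concatenate([np.zeros(d + 1), np.ones(n)])  # minimize sum of slacks
# y_i (w.x_i + c) >= 1 - s_i  rewritten as  -y_i*(x_i, 1).(w, c) - s_i <= -1
A_ub = np.hstack([-(y[:, None] * np.hstack([X, np.ones((n, 1))])), -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(None, None)] * (d + 1) + [(0, None)] * n   # w, c free; s_i >= 0

res = linprog(cvec, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w, c = res.x[:d], res.x[d]
print("w =", w.round(3), "c =", round(c, 3),
      "total slack =", res.x[d + 1:].sum().round(3))
```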

  14. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    PubMed

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden on data centers and mobile networks induced by the rapid growth of the Internet of Things (IoT). This work introduces a cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Because the computation resources of cloudlets and the communication resources of wireless access points (APs) are limited, IoT sensors that make identical computation offloading decisions contend with each other. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the problem is formulated as a computation offloading game, and the condition for a Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection and cloud-first algorithms, and that it scales well with an increasing number of IoT sensors.
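
    The finite improvement property means that sequential best-response updates must terminate at a Nash equilibrium. The sketch below illustrates this with a toy congestion-style offloading game; the cost model, parameters, and three-choice setup are illustrative assumptions, not the paper's COD formulation.

```python
import numpy as np

# Toy offloading game: each of N sensors chooses local execution (0) or one
# of two cloudlets (1, 2); a cloudlet's delay grows with the number of
# sensors using it. Sequential best responses converge to an equilibrium.

rng = np.random.default_rng(2)
N, CHOICES, LOCAL = 12, (0, 1, 2), 0
local_cost = rng.uniform(4.0, 7.0, N)       # per-sensor cost of local execution

def cost(i, a, choices):
    if a == LOCAL:
        return local_cost[i]
    # congestion if sensor i uses cloudlet a: everyone else on a, plus i
    others = np.sum((choices == a) & (np.arange(N) != i))
    return 1.0 + 0.8 * (others + 1)

choices = np.zeros(N, dtype=int)
improved = True
while improved:                              # finite improvement property:
    improved = False                         # this loop must terminate
    for i in range(N):
        best = min(CHOICES, key=lambda a: cost(i, a, choices))
        if cost(i, best, choices) + 1e-9 < cost(i, choices[i], choices):
            choices[i] = best
            improved = True
print("equilibrium choices:", choices)
```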

  15. FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)

    NASA Astrophysics Data System (ADS)

    Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.

    2011-04-01

    A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach handles objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU to central processing unit speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and of a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements, on an inexpensive desktop computer.
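
    For orientation, the Landau-Lifshitz-Gilbert equation that FastMag solves can be illustrated with a single-macrospin explicit integration; this toy sketch omits everything that makes FastMag fast (tetrahedral meshes, superposition integrals, GPUs), and its parameters are illustrative.

```python
import numpy as np

# Explicit time stepping of the Landau-Lifshitz-Gilbert equation for one
# macrospin in a fixed applied field; the magnetization damps toward the
# field direction while precessing. Field strength and damping are made up.

gamma, alpha = 2.21e5, 0.1             # gyromagnetic ratio (m/(A*s)), damping
H = np.array([0.0, 0.0, 8.0e4])        # applied field (A/m), along +z
m = np.array([1.0, 0.0, 0.0])          # unit magnetization, initially along x

dt = 1e-13                             # time step (s)
for _ in range(20000):
    mxH = np.cross(m, H)
    # dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H))
    dm = -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
    m = m + dt * dm
    m /= np.linalg.norm(m)             # renormalize to keep |m| = 1
print("final m:", m.round(4))          # relaxes toward +z
```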

  16. Color appearance for photorealistic image synthesis

    NASA Astrophysics Data System (ADS)

    Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio

    2000-12-01

    Photorealistic image synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction that allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution lets us also cope with the non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best solution to compress the extended dynamic range of the computed light field into the limited range of displayable colors. Recently, some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis to solve the tone reproduction problem of synthetic image generation by integrating the Retinex perception model into the photorealistic image synthesis context.
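
    As a minimal illustration of the dynamic-range compression at the heart of the tone reproduction problem (a simple global logarithmic operator, not the Retinex-based model the paper proposes):

```python
import numpy as np

# Compress a synthetic high-dynamic-range luminance field into displayable
# [0, 1] values with a global logarithmic operator. The luminance data and
# the operator choice are illustrative stand-ins for a computed light field.

rng = np.random.default_rng(3)
L = 10.0 ** rng.uniform(-2, 4, size=(4, 4))    # radiance spanning 6 decades

def tonemap_log(L, L_max=None):
    L_max = L.max() if L_max is None else L_max
    return np.log1p(L) / np.log1p(L_max)        # maps [0, L_max] -> [0, 1]

D = tonemap_log(L)
print("input range:  ", (L.min().round(3), L.max().round(1)))
print("display range:", (D.min().round(3), D.max().round(3)))
```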

  17. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This one-day workshop took place at Ecole Normale Supérieure de Cachan on May 29, 2015, and attracted around 70 attendees. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference in May 2011, and subsequently at the initiative of Institut Farman in May 2012, May 2013 and May 2014. The workshop focused on recent advances in the resolution of inverse problems. Inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of an inverse problem consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success requires, firstly, the collection of relevant observation data; it also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself; finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate, high-volume data; in this context, efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During the workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). Each submitted paper was reviewed by two reviewers, and 15 papers were accepted. In addition, three international speakers were invited to present longer talks. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  18. Self-organization in neural networks - Applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and of Hopfield and Elastic networks, to problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories, and the categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted to the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems, of particular interest being problems characterized by the presence of discrete and integer design variables. The parallel computing architecture typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included.

  19. Thermo-mechanical design aspects of mercury bombardment ion thrusters.

    NASA Technical Reports Server (NTRS)

    Schnelker, D. E.; Kami, S.

    1972-01-01

    The mechanical design criteria are presented as background considerations for solving problems associated with the thermomechanical design of mercury ion bombardment thrusters. Various analytical procedures are used to aid in the development of thruster subassemblies and components in the fields of heat transfer, vibration, and stress analysis. Examples of these techniques which provide computer solutions to predict and control stress levels encountered during launch and operation of thruster systems are discussed. Computer models of specific examples are presented.

  20. Nuclear Fuel Depletion Analysis Using Matlab Software

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Nematollahi, M. R.

    Coupled first-order IVPs are frequently encountered in many areas of engineering and science. In this article, we present a code comprising three computer programs, used in conjunction with the Matlab software, to solve and plot the solutions of first-order coupled stiff or non-stiff IVPs. Some engineering and scientific problems related to IVPs are given, and fuel depletion (production of the 239Pu isotope) in a Pressurized Water Reactor (PWR) is computed by the present code.
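
    A comparable computation in Python, shown here only to illustrate the kind of stiff coupled IVP the article's Matlab code handles; the two-nuclide chain and its rate constants are illustrative, not PWR data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-nuclide chain N1 -> N2 -> (loss) with widely separated rate
# constants, which makes the coupled system stiff; an implicit method (BDF)
# handles it efficiently where an explicit solver would need tiny steps.

l1, l2 = 1.0, 1e4                      # slow production, fast destruction

def rhs(t, N):
    N1, N2 = N
    return [-l1 * N1, l1 * N1 - l2 * N2]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-12)
print("t =", sol.t[-1], " N1 =", sol.y[0, -1], " N2 =", sol.y[1, -1])
```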
