Sample records for large complex problems

  1. A brief historical introduction to Euler's formula for polyhedra, topology, graph theory and networks

    NASA Astrophysics Data System (ADS)

    Debnath, Lokenath

    2010-09-01

    This article is devoted to a brief historical introduction to Euler's formula for polyhedra, topology, the theory of graphs, and networks, with many examples from the real world. The celebrated Königsberg seven-bridge problem and some basic properties of graphs and networks, which give some understanding of the macroscopic behaviour of real physical systems, are included. We also mention some important modern applications of graph theory and network problems, from transportation to telecommunications. Graphs and networks are used effectively as powerful tools in industrial, electrical and civil engineering, in communication networks, and in the planning of business and industry. Graph theory and combinatorics can be used to understand the changes that occur in many large and complex scientific, technical and medical systems. With the advent of fast large computers and the ubiquitous Internet, itself a very large network of computers, large-scale complex optimization problems can be modelled in terms of graphs or networks and then solved by algorithms available in graph theory. Many larger and more complex combinatorial problems, which deal with possible arrangements of various kinds and with computing the number and properties of such arrangements, can also be formulated in terms of networks. The knight's tour problem, Hamilton's tour problem, the problem of magic squares, Euler's Graeco-Latin squares problem and their modern developments in the twentieth century are also included.
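
    As a small, self-contained illustration (ours, not part of the article), the following Python sketch checks Euler's polyhedron formula V - E + F = 2 for a cube and applies the degree argument behind the Königsberg seven-bridge problem; the land-mass labels A-D are hypothetical names for the four regions.

    ```python
    # Illustrative sketch: Euler's polyhedron formula and the degree test
    # behind the Königsberg bridge problem (labels A-D are our own).

    def euler_characteristic(V, E, F):
        """V - E + F, which equals 2 for every convex polyhedron."""
        return V - E + F

    # Cube: 8 vertices, 12 edges, 6 faces.
    assert euler_characteristic(8, 12, 6) == 2

    # Königsberg as a multigraph: four land masses and seven bridges.
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    degree = {}
    for u, v in bridges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1

    odd = [node for node, d in degree.items() if d % 2 == 1]
    # An Euler circuit needs all degrees even; an Euler trail allows exactly two odd.
    print("odd-degree land masses:", odd)              # all four are odd
    print("closed walk over every bridge possible:", len(odd) == 0)
    print("open walk over every bridge possible:", len(odd) in (0, 2))
    ```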

  2. Differential Relations between Facets of Complex Problem Solving and Students' Immigration Background

    ERIC Educational Resources Information Center

    Sonnleitner, Philipp; Brunner, Martin; Keller, Ulrich; Martin, Romain

    2014-01-01

    Whereas the assessment of complex problem solving (CPS) has received increasing attention in the context of international large-scale assessments, its fairness in regard to students' cultural background has gone largely unexplored. On the basis of a student sample of 9th-graders (N = 299), including a representative number of immigrant students (N…

  3. Influence of the large-small split effect on strategy choice in complex subtraction.

    PubMed

    Xiang, Yan Hui; Wu, Hao; Shang, Rui Hong; Chao, Xiaomei; Ren, Ting Ting; Zheng, Li Ling; Mo, Lei

    2018-04-01

    Two main theories have been used to explain the arithmetic split effect: decision-making process theory and strategy choice theory. Using the inequality paradigm, previous studies have confirmed that individuals tend to adopt a plausibility-checking strategy and a whole-calculation strategy to solve large and small split problems in complex addition arithmetic, respectively. This supports strategy choice theory, but it is unknown whether this theory also explains performance in solving different split problems in complex subtraction arithmetic. This study used small, intermediate and large split sizes, with each split condition being further divided into problems requiring and not requiring borrowing. The reaction times (RTs) for large and intermediate splits were significantly shorter than those for small splits, while accuracy was significantly higher for large and intermediate splits than for small splits, reflecting no speed-accuracy trade-off. Further, RTs and accuracy differed significantly between the borrow and no-borrow conditions only for small splits. This study indicates that strategy choice theory is suitable to explain the split effect in complex subtraction arithmetic. That is, individuals tend to choose the plausibility-checking strategy or the whole-calculation strategy according to the split size. © 2016 International Union of Psychological Science.

  4. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size: combinatorial explosion in experimentation and model building with the number of variables, so that both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
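
    The "problem of size" the authors describe can be made concrete with a back-of-the-envelope count (this sketch is ours, not the paper's): the number of runs in a full factorial experiment grows exponentially with the number of design variables, which is what defeats screening and metamodel building for large systems.

    ```python
    # Illustrative sketch: growth of experiment size with the number of
    # design variables (combinatorial explosion in model building).

    def full_factorial_runs(n_variables, levels=3):
        return levels ** n_variables

    def central_composite_runs(n_variables):
        # 2**k factorial points + 2*k axial points + 1 center point.
        return 2 ** n_variables + 2 * n_variables + 1

    for k in (2, 5, 10, 20, 30):
        print(f"{k:2d} variables: 3-level full factorial = {full_factorial_runs(k):>16,} runs, "
              f"CCD = {central_composite_runs(k):>13,} runs")
    ```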

  5. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  6. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.

  7. Students' Explanations in Complex Learning of Disciplinary Programming

    ERIC Educational Resources Information Center

    Vieira, Camilo

    2016-01-01

    Computational Science and Engineering (CSE) has been denominated as the third pillar of science and as a set of important skills to solve the problems of a global society. Along with the theoretical and the experimental approaches, computation offers a third alternative to solve complex problems that require processing large amounts of data, or…

  8. Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim

    2013-01-01

    Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…

  9. The Development of Complex Problem Solving in Adolescence: A Latent Growth Curve Analysis

    ERIC Educational Resources Information Center

    Frischkorn, Gidon T.; Greiff, Samuel; Wüstenberg, Sascha

    2014-01-01

    Complex problem solving (CPS) as a cross-curricular competence has recently attracted more attention in educational psychology as indicated by its implementation in international educational large-scale assessments such as the Programme for International Student Assessment. However, research on the development of CPS is scarce, and the few…

  10. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
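
    The core move of the paper, replacing a large nuclear-norm-regularized problem with a small factored one, can be illustrated with a generic alternating-least-squares sketch. This is our stand-in, not the authors' active subspace algorithm or their augmented Lagrange alternating direction method; it relies on the standard variational fact that the nuclear norm of X equals the minimum of (1/2)(||U||_F^2 + ||V||_F^2) over factorizations X = U V^T of sufficient rank.

    ```python
    # Generic sketch (not the authors' algorithm): approximate
    #     min_X 0.5*||A - X||_F^2 + lam*||X||_*
    # via the factored surrogate
    #     min_{U,V} 0.5*||A - U V^T||_F^2 + 0.5*lam*(||U||_F^2 + ||V||_F^2),
    # solved by alternating ridge-regression updates on small factors.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, true_rank, k, lam = 500, 400, 5, 10, 1.0

    A = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, m))
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((m, k))

    for _ in range(50):
        # Each update solves a small k-by-k ridge system instead of an n-by-m one.
        U = A @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
        V = A.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

    X = U @ V.T
    rel_err = np.linalg.norm(A - X) / np.linalg.norm(A)
    print(f"relative error {rel_err:.2e}; stored {U.size + V.size:,} factor entries "
          f"instead of {A.size:,}")
    ```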

  11. Wicked Problems in Large Organizations: Why Pilot Retention Continues to Challenge the Air Force

    DTIC Science & Technology

    2017-05-25

    ABSTRACT This monograph in military studies investigates the makeup of and approach to complex problems, with a case study on the Air Force’s... priorities, as well as a short, recent history of the pilot retention problem. Following that is a case study on the work done by the Air Staff in... Lonsberry, USAF, 38 pages.

  12. Enabling Controlling Complex Networks with Local Topological Information.

    PubMed

    Li, Guoqi; Deng, Lei; Xiao, Gaoxi; Tang, Pei; Wen, Changyun; Hu, Wuhua; Pei, Jing; Shi, Luping; Stanley, H Eugene

    2018-03-15

    Complex networks characterize the nature of internal/external interactions in real-world systems including social, economic, biological, ecological, and technological networks. Two issues remain obstacles to achieving control of large-scale networks: structural controllability, which describes the ability to guide a dynamical system from any initial state to any desired final state in finite time with a suitable choice of inputs; and optimal control, which is a typical control approach to minimize the cost of driving the network to a predefined state with a given number of control inputs. For large complex networks without global information of the network topology, both problems remain essentially open. Here we combine graph theory and control theory to tackle the two problems in one go, using only local network topology information. For the structural controllability problem, a distributed local-game matching method is proposed, where every node plays a simple Bayesian game with local information and local interactions with adjacent nodes, ensuring a suboptimal solution at a linear complexity. Starting from any structural controllability solution, a minimizing-longest-control-path method can efficiently reach a good solution for the optimal control in large networks. Our results provide solutions for distributed complex network control and demonstrate a way to link structural controllability and optimal control together.
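
    For context, the classical centralized baseline for the structural-controllability half of this problem is a maximum matching on the bipartite out-copy/in-copy representation of the directed network: nodes whose in-copy is unmatched must be driven directly. The sketch below is ours and uses a plain augmenting-path matching, not the paper's distributed local-game method.

    ```python
    # Centralized baseline sketch (not the paper's distributed method):
    # driver nodes for structural controllability via maximum bipartite matching.
    # Each edge u->v links the out-copy of u to the in-copy of v; unmatched
    # in-copies correspond to nodes that need a direct control input.

    def max_matching(adj, n_right):
        """Kuhn's augmenting-path algorithm; adj[u] lists right nodes of u."""
        match_right = [-1] * n_right              # right node -> matched left node

        def try_augment(u, seen):
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    if match_right[v] == -1 or try_augment(match_right[v], seen):
                        match_right[v] = u
                        return True
            return False

        for u in range(len(adj)):
            try_augment(u, set())
        return match_right

    # Example directed network on nodes 0..4.
    edges = [(0, 1), (1, 2), (2, 3), (3, 1), (0, 4)]
    n = 5
    adj = [[] for _ in range(n)]                  # left = out-copies, right = in-copies
    for u, v in edges:
        adj[u].append(v)

    match_right = max_matching(adj, n)
    drivers = [v for v in range(n) if match_right[v] == -1]
    print("driver nodes (unmatched in-copies):", drivers)   # node 0 here
    ```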

  13. Complex Problem Solving in Educational Contexts--Something beyond "g": Concept, Assessment, Measurement Invariance, and Construct Validity

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno

    2013-01-01

    Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…

  14. Assessment of Complex Problem Solving: What We Know and What We Don't Know

    ERIC Educational Resources Information Center

    Herde, Christoph Nils; Wüstenberg, Sascha; Greiff, Samuel

    2016-01-01

    Complex Problem Solving (CPS) is seen as a cross-curricular 21st century skill that has attracted interest in large-scale-assessments. In the Programme for International Student Assessment (PISA) 2012, CPS was assessed all over the world to gain information on students' skills to acquire and apply knowledge while dealing with nontransparent…

  15. Designing Cognitive Complexity in Mathematical Problem-Solving Items

    ERIC Educational Resources Information Center

    Daniel, Robert C.; Embretson, Susan E.

    2010-01-01

    Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…

  16. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    PubMed

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  17. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  18. GRADIENT: Graph Analytic Approach for Discovering Irregular Events, Nascent and Temporal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie

    2015-03-31

    Finding a time-ordered signature within large graphs is a computationally complex problem due to the combinatorial explosion of potential patterns. GRADIENT is designed to search and understand that problem space.

  19. GRADIENT: Graph Analytic Approach for Discovering Irregular Events, Nascent and Temporal

    ScienceCinema

    Hogan, Emilie

    2018-01-16

    Finding a time-ordered signature within large graphs is a computationally complex problem due to the combinatorial explosion of potential patterns. GRADIENT is designed to search and understand that problem space.

  20. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  1. Biologically-inspired approaches for self-organization, adaptation, and collaboration of heterogeneous autonomous systems

    NASA Astrophysics Data System (ADS)

    Steinberg, Marc

    2011-06-01

    This paper presents a selective survey of theoretical and experimental progress in the development of biologically inspired approaches for complex surveillance and reconnaissance problems with multiple, heterogeneous autonomous systems. The focus is on approaches that may address ISR problems that can quickly become mathematically intractable or otherwise impractical to implement using traditional optimization techniques as the size and complexity of the problem increase. These problems require dealing with complex spatiotemporal objectives and constraints at a variety of levels from motion planning to task allocation. There is also a need to ensure solutions are reliable and robust to uncertainty and communications limitations. First, the paper will provide a short introduction to the current state of relevant biological research as it relates to collective animal behavior. Second, the paper will describe research on largely decentralized, reactive, or swarm approaches that have been inspired by biological phenomena such as schools of fish, flocks of birds, ant colonies, and insect swarms. Next, the paper will discuss approaches towards more complex organizational and cooperative mechanisms in team and coalition behaviors in order to provide mission coverage of large, complex areas. Relevant team behavior may be derived from recent advances in understanding of the social and cooperative behaviors used for collaboration by tens of animals with higher-level cognitive abilities such as mammals and birds. Finally, the paper will briefly discuss challenges involved in user interaction with these types of systems.

  2. Large-N-approximated field theory for multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Parisi, G.; Pascazio, S.; Scardicchio, A.

    2015-12-01

    We try to characterize the statistics of multipartite entanglement of the random states of an n-qubit system. Unable to solve the problem exactly we generalize it, replacing complex numbers with real vectors with Nc components (the original problem is recovered for Nc = 2). Studying the leading diagrams in the large-Nc approximation, we unearth the presence of a phase transition and, in an explicit example, show that the so-called entanglement frustration disappears in the large-Nc limit.

  3. Numerical Modeling of Pulsed Electrical Discharges for High-Speed Flow Control

    DTIC Science & Technology

    2012-02-01

    ...dimensions, and later on more complex problems. Subsequent work compared different physical models for pulsed discharges: one-moment (drift-diffusion with... The state of a particle can be specified by its position and velocity. In principle, the motion of a large group of particles can be predicted from...

  4. An evaluation of superminicomputers for thermal analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Vidal, J. B.; Jones, G. K.

    1982-01-01

    The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of the solutions (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.

  5. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
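
    A small stand-in for the comparison described here (ours, not MODFLOW 2000 or PCG2): solve a generic two-dimensional Poisson-type sparse system with an algebraic multigrid solver from the pyamg package and with conjugate gradients preconditioned by an incomplete LU factorization from SciPy, the latter playing the role of PCG2's modified incomplete Cholesky preconditioner.

    ```python
    # Sketch of an AMG vs. preconditioned-CG comparison on a generic 2-D Poisson
    # problem (a stand-in for a ground water flow matrix, not MODFLOW itself).
    import time
    import numpy as np
    import pyamg                                            # assumes pyamg is installed
    from scipy.sparse.linalg import cg, spilu, LinearOperator

    A = pyamg.gallery.poisson((300, 300), format="csr")     # ~90,000 unknowns
    b = np.ones(A.shape[0])

    # Algebraic multigrid (classical Ruge-Stuben coarsening).
    t0 = time.time()
    ml = pyamg.ruge_stuben_solver(A)
    x_amg = ml.solve(b, tol=1e-8)
    t_amg = time.time() - t0

    # Conjugate gradients with an incomplete-LU preconditioner as a rough
    # analogue of the modified incomplete Cholesky used by PCG2.
    t0 = time.time()
    ilu = spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
    M = LinearOperator(A.shape, matvec=ilu.solve)
    x_cg, info = cg(A, b, M=M, atol=1e-8 * np.linalg.norm(b))
    t_cg = time.time() - t0

    print(f"AMG    : {t_amg:6.2f} s, residual {np.linalg.norm(b - A @ x_amg):.2e}")
    print(f"ILU-CG : {t_cg:6.2f} s, residual {np.linalg.norm(b - A @ x_cg):.2e} (info={info})")
    ```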

  6. Empirical results on scheduling and dynamic backtracking

    NASA Technical Reports Server (NTRS)

    Boddy, Mark S.; Goldman, Robert P.

    1994-01-01

    At the Honeywell Technology Center (HTC), we have been working on a scheduling problem related to commercial avionics. This application is large, complex, and hard to solve. To be a little more concrete: 'large' means almost 20,000 activities, 'complex' means several activity types, periodic behavior, and assorted types of temporal constraints, and 'hard to solve' means that we have been unable to eliminate backtracking through the use of search heuristics. At this point, we can generate solutions, where solutions exist, or report failure and sometimes why the system failed. To the best of our knowledge, this is among the largest and most complex scheduling problems to have been solved as a constraint satisfaction problem, at least that has appeared in the published literature. This abstract is a preliminary report on what we have done and how. In the next section, we present our approach to treating scheduling as a constraint satisfaction problem. The following sections present the application in more detail and describe how we solve scheduling problems in the application domain. The implemented system makes use of Ginsberg's Dynamic Backtracking algorithm, with some minor extensions to improve its utility for scheduling. We describe those extensions and the performance of the resulting system. The paper concludes with some general remarks, open questions and plans for future work.

  7. NGL Viewer: Web-based molecular graphics for large complexes.

    PubMed

    Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W

    2018-05-29

    The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.

  8. A novel heuristic algorithm for capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

    The vehicle routing problem with capacity constraints was considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods by reason of the high computational complexity for large-scale problems. Consequently, new heuristic or metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS) with several specifically designed operators and features to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm provides a better performance on large-scale instances and gains an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and found encouraging results in comparison with the company's current practice.
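
    To make the problem setting concrete, the sketch below is a bare-bones destroy-and-repair loop in the spirit of ALNS on a random CVRP instance. It is ours, not the authors' tabu-search/ALNS hybrid, and omits their specifically designed operators.

    ```python
    # Generic destroy-and-repair sketch for the capacitated VRP (not the paper's
    # tabu/ALNS hybrid): greedy construction, then random removal + cheapest
    # reinsertion, keeping the best solution found.
    import math
    import random

    random.seed(1)
    DEPOT = (50.0, 50.0)
    customers = {i: ((random.uniform(0, 100), random.uniform(0, 100)),
                     random.randint(1, 10)) for i in range(1, 21)}   # id -> (xy, demand)
    CAPACITY = 30

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def route_cost(route):
        pts = [DEPOT] + [customers[c][0] for c in route] + [DEPOT]
        return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

    def total_cost(routes):
        return sum(route_cost(r) for r in routes)

    def greedy_insert(routes, cust):
        """Insert one customer at the cheapest capacity-feasible position."""
        best = None
        for r in routes:
            if sum(customers[c][1] for c in r) + customers[cust][1] > CAPACITY:
                continue
            for pos in range(len(r) + 1):
                delta = route_cost(r[:pos] + [cust] + r[pos:]) - route_cost(r)
                if best is None or delta < best[0]:
                    best = (delta, r, pos)
        if best is None:
            routes.append([cust])                  # open a new vehicle route
        else:
            best[1].insert(best[2], cust)

    routes = []
    for c in customers:                            # greedy initial solution
        greedy_insert(routes, c)

    best_routes, best_val = [r[:] for r in routes], total_cost(routes)
    for _ in range(500):                           # destroy-and-repair iterations
        removed = random.sample([c for r in routes for c in r], 4)
        routes = [[c for c in r if c not in removed] for r in routes]
        routes = [r for r in routes if r]
        for c in removed:
            greedy_insert(routes, c)
        if total_cost(routes) < best_val:
            best_routes, best_val = [r[:] for r in routes], total_cost(routes)
        else:
            routes = [r[:] for r in best_routes]   # restart from the incumbent

    print(f"{len(best_routes)} routes, total distance {best_val:.1f}")
    ```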

  9. Solution Strategies and Achievement in Dutch Complex Arithmetic: Latent Variable Modeling of Change

    ERIC Educational Resources Information Center

    Hickendorff, Marian; Heiser, Willem J.; van Putten, Cornelis M.; Verhelst, Norman D.

    2009-01-01

    In the Netherlands, national assessments at the end of primary school (Grade 6) show a decline of achievement on problems of complex or written arithmetic over the last two decades. The present study aims at contributing to an explanation of the large achievement decrease on complex division, by investigating the strategies students used in…

  10. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  11. Dependency visualization for complex system understanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smart, J. Allison Cory

    1994-09-01

    With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and for design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs. The list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem. The ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed graph based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.

  12. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  13. Improved Intelligence Warning in an Age of Complexity

    DTIC Science & Technology

    2015-05-21

    ...at, and applying complexity science to this problem, which is represented by a multidiscipline study of large networks comprised of interdependent... For analysts and policy makers, complexity science offers methods to improve this understanding. As said by Ms. Irene Sanders, director of the... science to improve intelligence warning. The initial section describes how policy makers and national security leaders understand the current...

  14. Systems engineering for very large systems

    NASA Technical Reports Server (NTRS)

    Lewkowicz, Paul E.

    1993-01-01

    Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profitability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.

  15. Systems engineering for very large systems

    NASA Astrophysics Data System (ADS)

    Lewkowicz, Paul E.

    Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profitability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.

  16. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  17. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  18. Interaction Network Estimation: Predicting Problem-Solving Diversity in Interactive Environments

    ERIC Educational Resources Information Center

    Eagle, Michael; Hicks, Drew; Barnes, Tiffany

    2015-01-01

    Intelligent tutoring systems and computer aided learning environments aimed at developing problem solving produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled student-tutor interactions using complex networks in…

  19. Digital Maps, Matrices and Computer Algebra

    ERIC Educational Resources Information Center

    Knight, D. G.

    2005-01-01

    The way in which computer algebra systems, such as Maple, have made the study of complex problems accessible to undergraduate mathematicians with modest computational skills is illustrated by some large matrix calculations, which arise from representing the Earth's surface by digital elevation models. Such problems are often considered to lie in…

  20. Invasion complexity at large spatial scales is an emergent property of interactions among landscape characteristics and invader traits

    USDA-ARS?s Scientific Manuscript database

    Understanding the potential for invasive spread is an important consideration for novel agricultural species that may be translocated or introduced into new regions. However, estimating invasion risks remains a challenging problem, particularly in the context of real, complex landscapes. There is ...

  1. Handling a Small Dataset Problem in Prediction Model by employ Artificial Data Generation Approach: A Review

    NASA Astrophysics Data System (ADS)

    Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd

    2017-09-01

    The emerging era of big data over the past few years has led to large and complex data that require faster and better decision making. However, small-dataset problems still arise in certain areas, making analysis and decisions hard to reach. In order to build a prediction model, a large sample is required for training the model; a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
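
    One simple member of the family of artificial-data-generation techniques the review surveys is a noise-perturbed bootstrap: resample the few observed points and jitter them. The sketch below is ours and is not necessarily one of the specific methods covered (for example, mega-trend diffusion); the toy regression task is hypothetical.

    ```python
    # Toy sketch: augmenting a small dataset with noise-perturbed bootstrap
    # samples before fitting a simple prediction model.
    import numpy as np

    rng = np.random.default_rng(0)

    # A "small dataset": 15 observations with y ~ 2x + 1 + noise.
    x_small = rng.uniform(0, 10, size=15)
    y_small = 2.0 * x_small + 1.0 + rng.normal(0, 1.0, size=15)

    def generate_artificial(x, y, n_new, noise_scale=0.25):
        """Resample observed points and jitter them with small Gaussian noise."""
        idx = rng.integers(0, len(x), size=n_new)
        x_new = x[idx] + rng.normal(0, noise_scale * x.std(), size=n_new)
        y_new = y[idx] + rng.normal(0, noise_scale * y.std(), size=n_new)
        return x_new, y_new

    x_aug, y_aug = generate_artificial(x_small, y_small, n_new=200)
    x_all = np.concatenate([x_small, x_aug])
    y_all = np.concatenate([y_small, y_aug])

    # Compare a least-squares line fitted on the small vs. augmented data.
    for name, (xs, ys) in {"small": (x_small, y_small),
                           "augmented": (x_all, y_all)}.items():
        slope, intercept = np.polyfit(xs, ys, 1)
        print(f"{name:9s}: y = {slope:.2f} x + {intercept:.2f}")
    ```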

  2. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to those arising in other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, mostly a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
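
    A very naive stand-in for the projection step (ours, not the authors' generalized numerical scheme): keep only the basis states with the smallest diagonal entries of a diagonally dominant sparse matrix, diagonalize the projected block, and compare against an exact sparse eigensolver.

    ```python
    # Naive projection sketch (not the paper's method): restrict a sparse,
    # diagonally dominant symmetric matrix to its lowest-diagonal basis states.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    rng = np.random.default_rng(0)
    n, keep = 2000, 200

    diag = np.sort(rng.uniform(0.0, 100.0, n))              # dominant diagonal part
    offdiag = sp.random(n, n, density=0.002, random_state=0)
    H = (sp.diags(diag) + 0.5 * (offdiag + offdiag.T)).tocsr()

    # Projection: keep the 'keep' basis states with the smallest diagonal entries.
    idx = np.argsort(diag)[:keep]
    H_small = H[idx, :][:, idx].toarray()
    approx_low = np.linalg.eigvalsh(H_small)[:5]

    # Reference: lowest eigenvalues of the full sparse matrix.
    exact_low = np.sort(eigsh(H, k=5, which="SA", return_eigenvectors=False))

    print("projected:", np.round(approx_low, 4))
    print("exact    :", np.round(exact_low, 4))
    ```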

  3. From path models to commands during additive printing of large-scale architectural designs

    NASA Astrophysics Data System (ADS)

    Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.

    2018-05-01

    The article considers the problem of automating the formation of large complex parts, products and structures, especially for unique or small-batch objects produced by additive technology [1]. Research into the optimal design of a robotic complex, its operating modes and its control structure helped to establish the technical requirements for the manufacturing process and for the design and installation of the robotic complex. Research on virtual models of the robotic complex made it possible to define the main directions of design improvement and the main purpose of testing the manufactured prototype: checking the positioning accuracy of the working part.

  4. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    Using a GA to solve the winner determination problem (WDP) with many bids and items, run under different distributions, is difficult because the search space is large, the constraints are complex and infeasible solutions are easily produced, all of which affects the efficiency and quality of the algorithm. This paper presents an improved MKGA that includes three operators (preprocessing, bid insertion and exchange recombination) and uses a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than a standard GA in terms of population size and computation. Problems that the traditional branch-and-bound algorithm finds hard to solve can be solved by the improved MKGA with better results.
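
    For readers new to the WDP, the sketch below (ours, not the improved MKGA) states the problem and contrasts exact enumeration on a tiny instance with a naive greedy heuristic; the exponential cost of the exact search is why genetic and other heuristic algorithms are used at scale.

    ```python
    # Illustrative WDP sketch (not the MKGA): choose non-overlapping bids on
    # item bundles so that total revenue is maximized.
    from itertools import combinations

    bids = [  # (bundle of items, price) -- a made-up toy instance
        ({"a", "b"}, 9), ({"b", "c"}, 8), ({"c", "d"}, 7),
        ({"a"}, 4), ({"d"}, 5), ({"a", "c"}, 10),
    ]

    def compatible(selection):
        taken = set()
        for items, _ in selection:
            if taken & items:
                return False
            taken |= items
        return True

    # Exact enumeration: 2**n subsets of bids, feasible only for tiny n.
    best_exact = max(
        (sel for k in range(len(bids) + 1) for sel in combinations(bids, k)
         if compatible(sel)),
        key=lambda sel: sum(price for _, price in sel),
    )
    print("exact :", sum(price for _, price in best_exact),
          [sorted(items) for items, _ in best_exact])

    # Greedy heuristic: highest price per item first, skipping conflicts.
    taken, greedy_value = set(), 0
    for items, price in sorted(bids, key=lambda b: b[1] / len(b[0]), reverse=True):
        if not taken & items:
            taken |= items
            greedy_value += price
    print("greedy:", greedy_value)     # smaller than the exact optimum here
    ```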

  5. On the importance of the Cerulean and Golden-winged Warblers summits in the National Federation of Coffee Growers of Colombia in Bogotá

    Treesearch

    Paul B. Hamel

    2008-01-01

    Cerulean Warbler is a bird with problems; this migratory bird lives in environments on which large numbers of people depend for an adequate productive livelihood, energy, high quality wood products, coffee, and cacao. Solving the biological problems of this species in its complex...

  6. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.

  7. The application of interactive graphics to large time-dependent hydrodynamics problems

    NASA Technical Reports Server (NTRS)

    Gama-Lobo, F.; Maas, L. D.

    1975-01-01

    A written companion of a movie entitled "Interactive Graphics at Los Alamos Scientific Laboratory" was presented. While the movie presents the actual graphics terminal and the functions performed on it, the paper attempts to put in perspective the complexity of the application code and the complexity of the interaction that is possible.

  8. Evaluating Action Learning: A Critical Realist Complex Network Theory Approach

    ERIC Educational Resources Information Center

    Burgoyne, John G.

    2010-01-01

    This largely theoretical paper will argue the case for the usefulness of applying network and complex adaptive systems theory to an understanding of action learning and the challenge it is evaluating. This approach, it will be argued, is particularly helpful in the context of improving capability in dealing with wicked problems spread around…

  9. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  10. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2017-04-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over the traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve the search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
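
    As a minimal illustration of the underlying optimization (ours, not the paper's modified PSO variants), the sketch below dispatches a hypothetical three-unit, single-area system with a plain PSO, handling the power-balance constraint by a penalty term; tie-line, ramp-rate and multi-area constraints are omitted.

    ```python
    # Bare-bones sketch: single-area static economic dispatch by plain PSO with
    # a penalty for the power-balance constraint (unit data are made up).
    import numpy as np

    rng = np.random.default_rng(0)

    # Fuel cost of unit i: a_i + b_i*P_i + c_i*P_i**2 ($/h), P_i in MW.
    a = np.array([500.0, 400.0, 200.0])
    b = np.array([5.3, 5.5, 5.8])
    c = np.array([0.004, 0.006, 0.009])
    p_min = np.array([200.0, 150.0, 100.0])
    p_max = np.array([450.0, 350.0, 225.0])
    demand, penalty = 800.0, 1e4

    def cost(P):
        fuel = np.sum(a + b * P + c * P**2, axis=-1)
        return fuel + penalty * np.abs(np.sum(P, axis=-1) - demand)

    n_particles, iters = 40, 300
    X = rng.uniform(p_min, p_max, size=(n_particles, 3))
    V = np.zeros_like(X)
    pbest, pbest_val = X.copy(), cost(X)
    gbest = pbest[np.argmin(pbest_val)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, p_min, p_max)
        vals = cost(X)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("dispatch (MW):", np.round(gbest, 1), "sum:", round(float(gbest.sum()), 1))
    print("fuel cost ($/h):", round(float(np.sum(a + b * gbest + c * gbest**2)), 1))
    ```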

  11. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.

  12. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
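
    The kind of recursion BigDatalog compiles corresponds, in plain Spark, to an iterate-until-fixpoint sequence of joins. The minimal PySpark transitive-closure loop below is our sketch, not BigDatalog output; its repeated full re-evaluation of the growing relation is exactly the inefficiency that dedicated recursion support is meant to remove.

    ```python
    # Minimal sketch (not BigDatalog): the recursive Datalog program
    #     tc(X, Y) :- edge(X, Y).
    #     tc(X, Y) :- tc(X, Z), edge(Z, Y).
    # evaluated naively in plain PySpark as a join loop until fixpoint.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("tc-sketch").getOrCreate()

    edges = spark.createDataFrame([(1, 2), (2, 3), (3, 4), (4, 2)], ["src", "dst"])
    tc = edges

    while True:
        expanded = (tc.alias("t")
                      .join(edges.alias("e"), col("t.dst") == col("e.src"))
                      .select(col("t.src").alias("src"), col("e.dst").alias("dst")))
        new_tc = tc.union(expanded).distinct()
        if new_tc.count() == tc.count():           # fixpoint reached
            break
        tc = new_tc

    tc.orderBy("src", "dst").show()
    spark.stop()
    ```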

  13. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  14. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    PubMed

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.

  15. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    NASA Astrophysics Data System (ADS)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    A generalized mathematical model of decision-making in the problem of planning and selecting operating modes that provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability and safety of such a complex system are evaluated simultaneously against several indicators, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. A coordinated solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimal operating mode of a complex heat supply system is chosen on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments where necessary, guaranteeing optimal safety, reliability and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the permitted deviation of the indoor air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.

  16. Mathematical Models to Determine Stable Behavior of Complex Systems

    NASA Astrophysics Data System (ADS)

    Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.

    2018-05-01

    The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors affecting its operation. The functioning of such a complex dynamic system is described in terms of chaotic states, self-organized criticality and bifurcations. This problem may be addressed by modeling such systems as dynamic ones, without applying stochastic models, while taking strange attractors into account.

  17. Putting problem formulation at the forefront of GMO risk analysis.

    PubMed

    Tepfer, Mark; Racovita, Monica; Craig, Wendy

    2013-01-01

    When applying risk assessment and the broader process of risk analysis to decisions regarding the dissemination of genetically modified organisms (GMOs), the process has a tendency to become remarkably complex. Further, as greater numbers of countries consider authorising the large-scale dissemination of GMOs, and as GMOs with more complex traits reach late stages of development, there has been increasing concern about the burden posed by the complexity of risk analysis. We present here an improved approach for GMO risk analysis that gives a central role to problem formulation. Further, the risk analysis strategy has been clarified and simplified in order to make rigorously scientific risk assessment and risk analysis more broadly accessible to diverse stakeholder groups.

  18. Statistical physics of hard combinatorial optimization: Vertex cover problem

    NASA Astrophysics Data System (ADS)

    Zhao, Jin-Hua; Zhou, Hai-Jun

    2014-07-01

    Typical-case computation complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computation complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physical methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to other optimization problems.
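
    The message-passing machinery itself is beyond a short example, but the underlying combinatorial problem is easy to state in code. Below is a minimal sketch of the classical matching-based 2-approximation for minimum vertex cover, intended only as a baseline for experimentation; it is not the mean-field or message-passing approach discussed in the abstract, and the function name and toy graph are illustrative.

```python
# Minimal 2-approximation for minimum vertex cover (baseline illustration only;
# not the message-passing / mean-field approach discussed in the abstract).
def greedy_vertex_cover(edges):
    """Return a vertex cover at most twice the optimal size.

    edges: iterable of (u, v) pairs describing an undirected graph.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Take both endpoints of an uncovered edge (classic matching-based bound).
            cover.add(u)
            cover.add(v)
    return cover

if __name__ == "__main__":
    # Small example: a path 0-1-2-3 plus a chord 1-3.
    g = [(0, 1), (1, 2), (2, 3), (1, 3)]
    # Prints {0, 1, 2, 3}; an optimal cover such as {1, 3} has size 2,
    # so the result stays within the factor-2 guarantee.
    print(greedy_vertex_cover(g))
```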

  19. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.

  20. Massively parallel support for a case-based planning system

    NASA Technical Reports Server (NTRS)

    Kettler, Brian P.; Hendler, James A.; Anderson, William A.

    1993-01-01

    Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.

  1. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Because these buried structures are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods, such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging and require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observations in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature a classical linear slip-weakening friction law on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  2. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is proposed. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the proposed method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and approaches optimal performance.
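
    As a rough illustration of the pilot-based least-squares idea (for a single SISO-OFDM link, not the specific MIMO-to-SISO decomposition proposed by the authors), the frequency-domain channel at pilot subcarriers can be estimated by dividing received pilots by transmitted pilots and interpolating to the remaining subcarriers. The numpy sketch below uses made-up parameters (64 subcarriers, comb pilots, a 4-tap channel).

```python
import numpy as np

# Minimal SISO-OFDM least-squares (LS) channel estimate at pilot subcarriers,
# followed by linear interpolation to the full FFT grid. Parameters are illustrative.
n_sub = 64                                         # total subcarriers
pilot_idx = np.arange(0, n_sub, 8)                 # comb-type pilots every 8th subcarrier
x_pilot = np.ones(len(pilot_idx), dtype=complex)   # known pilot symbols

rng = np.random.default_rng(0)
h_true = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)   # 4-tap channel
H_true = np.fft.fft(h_true, n_sub)                                     # frequency response

noise = 0.05 * (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub))
y = H_true * np.ones(n_sub, dtype=complex) + noise   # received frame (all subcarriers carry 1)

# LS estimate at pilot positions: H_ls = Y_pilot / X_pilot.
H_ls_pilot = y[pilot_idx] / x_pilot

# Interpolate real and imaginary parts separately to all subcarriers.
H_est = np.interp(np.arange(n_sub), pilot_idx, H_ls_pilot.real) \
        + 1j * np.interp(np.arange(n_sub), pilot_idx, H_ls_pilot.imag)

print("mean estimation error:", np.mean(np.abs(H_est - H_true)))
```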

  3. Multicategory Composite Least Squares Classifiers

    PubMed Central

    Park, Seo Young; Liu, Yufeng; Liu, Dacheng; Scholl, Paul

    2010-01-01

    Classification is a very useful statistical tool for information extraction. In particular, multicategory classification is commonly seen in various applications. Although binary classification problems are heavily studied, extensions to the multicategory case are much less so. In view of the increased complexity and volume of modern statistical problems, it is desirable to have multicategory classifiers that are able to handle problems with high dimensions and with a large number of classes. Moreover, it is necessary to have sound theoretical properties for the multicategory classifiers. In the literature, there exist several different versions of simultaneous multicategory Support Vector Machines (SVMs). However, the computation of the SVM can be difficult for large scale problems, especially for problems with a large number of classes. Furthermore, the SVM cannot produce class probability estimation directly. In this article, we propose a novel efficient multicategory composite least squares classifier (CLS classifier), which utilizes a new composite squared loss function. The proposed CLS classifier has several important merits: efficient computation for problems with a large number of classes, asymptotic consistency, ability to handle high dimensional data, and simple conditional class probability estimation. Our simulated and real examples demonstrate competitive performance of the proposed approach. PMID:21218128
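
    A hedged sketch of why squared-loss classifiers scale easily to many classes: a one-vs-rest regularized least-squares classifier needs only one matrix factorization shared across all classes and returns probability-like scores. This is a generic stand-in, not the authors' composite squared loss; all data and parameters below are synthetic.

```python
import numpy as np

# One-vs-rest ridge (regularized least squares) classifier: a simplified stand-in
# for squared-loss multicategory classification, not the authors' composite loss.
def fit_ridge_ovr(X, y, n_classes, lam=1.0):
    """Return a (d, K) weight matrix; columns are per-class least-squares fits."""
    Y = np.eye(n_classes)[y]          # one-hot targets, shape (n, K)
    d = X.shape[1]
    # Closed-form ridge solution shared across all classes: one factorization, K solves.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, W):
    scores = X @ W
    return scores.argmax(axis=1), scores   # labels and raw per-class scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 10))
    # Four rough synthetic classes, coded 0..3.
    y = (X[:, 0] + X[:, 1] > 0).astype(int) + 2 * (X[:, 2] > 0.5)
    W = fit_ridge_ovr(X, y, n_classes=4)
    labels, _ = predict(X, W)
    print("training accuracy:", (labels == y).mean())
```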

  4. Designs for Operationalizing Collaborative Problem Solving for Automated Assessment

    ERIC Educational Resources Information Center

    Scoular, Claire; Care, Esther; Hesse, Friedrich W.

    2017-01-01

    Collaborative problem solving is a complex skill set that draws on social and cognitive factors. The construct remains in its infancy due to lack of empirical evidence that can be drawn upon for validation. The differences and similarities between two large-scale initiatives that reflect this state of the art, in terms of underlying assumptions…

  5. From Science to Business: Preparing Female Scientists and Engineers for Successful Transitions into Entrepreneurship--Summary of a Workshop

    ERIC Educational Resources Information Center

    Didion, Catherine Jay; Guenther, Rita S.; Gunderson, Victoria

    2012-01-01

    Scientists, engineers, and medical professionals play a vital role in building the 21st- century science and technology enterprises that will create solutions and jobs critical to solving the large, complex, and interdisciplinary problems faced by society: problems in energy, sustainability, the environment, water, food, disease, and healthcare.…

  6. Cultural Context of School Communities in Rural Hawaii to Inform Youth Violence Prevention

    ERIC Educational Resources Information Center

    Affonso, Dyanne D.; Mayberry, Linda; Shibuya, June Y.; Archambeau, Olga G.; Correa, Mary; Deliramich, Aimee N.; Frueh, B. Christopher

    2010-01-01

    Background: Escalation of youth violence within a large geographic school-complex area in southeastern rural Hawaii became a major problem in 2006. How cultural forces impact the problem was an impetus to examine youth violence from perspectives of adults and children in rural communities. Gathering these data was an essential first step toward…

  7. Getting Along: Negotiating Authority in High Schools. Final Report.

    ERIC Educational Resources Information Center

    Farrar, Eleanor; Neufeld, Barbara

    Appropriate responses to the authority problem in schools can be informed by a more complex understanding of the issue. Also of importance is knowledge of the ways in which schools and society at large are involved with both the creation of and the solution to the problem of student/teacher authority relations. School people are referring…

  8. The Meaning of School from Dropout's View Point (A Phenomenological Study)

    ERIC Educational Resources Information Center

    Habibi; Setiawan, Cally

    2017-01-01

    Student dropout is a complex problem in Indonesia. Some of the dropouts living in rural areas have migrated to large cities. This contributes to the growth of child labor, which is already one of the major problems in Indonesia. Knowledge about the meaning of school from their perspective could be helpful for policy and programs related to dropout…

  9. The problem of ecological scaling in spatially complex, nonequilibrium ecological systems [chapter 3

    Treesearch

    Samuel A. Cushman; Jeremy Littell; Kevin McGarigal

    2010-01-01

    In the previous chapter we reviewed the challenges posed by spatial complexity and temporal disequilibrium to efforts to understand and predict the structure and dynamics of ecological systems. The central theme was that spatial variability in the environment and population processes fundamentally alters the interactions between species and their environments, largely...

  10. Using δ15N of Chironomidae to help assess lake condition and possible stressors in EPA's National Lakes Assessment.

    EPA Science Inventory

    Background/Questions/Methods As interest in continental-scale ecology increases to address large-scale ecological problems, ecologists need indicators of complex processes that can be collected quickly at many sites across large areas. We are exploring the utility of stable isot...

  11. Reducing stand densities in immature and mature stands, Applegate Watershed, Southwest Oregon.

    Treesearch

    Marty L. Main; Michael P. Amaranthus

    1996-01-01

    Throughout the Applegate watershed, dense, overstocked, immature stands of mixed conifers and hardwoods and declining stands of mature conifers present significant and complex silvicultural problems. Stand stagnation is common, as is loss of large-diameter conifers from insects and wildfire. Treatments designed to maintain or encourage development of large-...

  12. An overview of the genetic dissection of complex traits.

    PubMed

    Rao, D C

    2008-01-01

    Thanks to the recent revolutionary genomic advances such as the International HapMap consortium, resolution of the genetic architecture of common complex traits is beginning to look hopeful. While demonstrating the feasibility of genome-wide association (GWA) studies, the pathbreaking Wellcome Trust Case Control Consortium (WTCCC) study also serves to underscore the critical importance of very large sample sizes and draws attention to potential problems, which need to be addressed as part of the study design. Even the large WTCCC study had vastly inadequate power for several of the associations reported (and confirmed) and, therefore, most of the regions harboring relevant associations may not be identified anytime soon. This chapter provides an overview of some of the key developments in the methodological approaches to genetic dissection of common complex traits. Constrained Bayesian networks are suggested as especially useful for analysis of pathway-based SNPs. Likewise, composite likelihood is suggested as a promising method for modeling complex systems. It discusses the key steps in a study design, with an emphasis on GWA studies. Potential limitations highlighted by the WTCCC GWA study are discussed, including problems associated with massive genotype imputation, analysis of pooled national samples, shared controls, and the critical role of interactions. GWA studies clearly need massive sample sizes that are only possible through genuine collaborations. After all, for common complex traits, the question is not whether we can find some pieces of the puzzle, but how large and what kind of a sample we need to (nearly) solve the genetic puzzle.

  13. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
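
    The modeling stage can be pictured as a regularized extreme learning machine (ELM): a random, untrained hidden layer followed by a ridge-regression readout mapping hop-count vectors to physical distances. The numpy sketch below is a generic ELM regressor with invented dimensions and synthetic data, not the authors' exact formulation.

```python
import numpy as np

# Generic regularized extreme learning machine (ELM) regressor: random hidden
# layer + ridge readout. Dimensions and data are illustrative only.
class RegularizedELM:
    def __init__(self, n_hidden=50, lam=0.1, seed=0):
        self.n_hidden, self.lam = n_hidden, lam
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        # Hidden-layer weights are random and never trained (the ELM idea).
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Ridge (Tikhonov-regularized) least squares for the output weights.
        self.beta = np.linalg.solve(H.T @ H + self.lam * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hops = rng.integers(1, 10, size=(200, 5)).astype(float)   # hop counts to 5 anchors
    dist = hops @ np.array([3.0, 2.5, 4.0, 1.5, 2.0]) + rng.normal(scale=0.5, size=200)
    model = RegularizedELM().fit(hops, dist)
    print("fit RMSE:", np.sqrt(np.mean((model.predict(hops) - dist) ** 2)))
```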

  14. The role of artificial intelligence techniques in scheduling systems

    NASA Technical Reports Server (NTRS)

    Geoffroy, Amy L.; Britt, Daniel L.; Gohring, John R.

    1990-01-01

    Artificial Intelligence (AI) techniques provide good solutions for many of the problems which are characteristic of scheduling applications. However, scheduling is a large, complex heterogeneous problem. Different applications will require different solutions. Any individual application will require the use of a variety of techniques, including both AI and conventional software methods. The operational context of the scheduling system will also play a large role in design considerations. The key is to identify those places where a specific AI technique is in fact the preferable solution, and to integrate that technique into the overall architecture.

  15. Dynamic analysis of space structures including elastic, multibody, and control behavior

    NASA Technical Reports Server (NTRS)

    Pinson, Larry; Soosaar, Keto

    1989-01-01

    The problem is to develop analysis methods, modeling strategies, and simulation tools to predict with assurance the on-orbit performance and integrity of large complex space structures that cannot be verified on the ground. The problem must incorporate large reliable structural models, multi-body flexible dynamics, multi-tier controller interaction, environmental models including 1g and atmosphere, various on-board disturbances, and linkage to mission-level performance codes. All areas are in serious need of work, but the weakest link is multi-body flexible dynamics.

  16. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
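
    The core move of Ordinal Optimization (goal softening plus ordinal comparison) is easy to sketch: rank a large set of candidate designs with a cheap, noisy surrogate, keep a small selected set, and spend expensive evaluations only on that set. The code below is a generic illustration with a synthetic objective and noise model, not the turbine-blade manufacturing model from the thesis.

```python
import numpy as np

# Ordinal optimization sketch: cheap noisy ranking + expensive evaluation of a
# small selected set. The objective and noise model are synthetic stand-ins.
rng = np.random.default_rng(42)

def expensive_eval(x):
    return (x - 0.3) ** 2                                # "true" cost (pretend it is costly)

def cheap_eval(x):
    return expensive_eval(x) + rng.normal(scale=0.05)    # crude model = truth + noise

candidates = rng.uniform(0, 1, size=10_000)

# Ordinal comparison: rank all candidates using only the cheap model.
rough_scores = np.array([cheap_eval(x) for x in candidates])
selected = candidates[np.argsort(rough_scores)[:20]]     # goal softening: keep the top 20

# Spend expensive evaluations only on the selected set.
best = min(selected, key=expensive_eval)
print("best design found:", best, "true cost:", expensive_eval(best))
```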

  17. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. The vortex tracking requires the analysis of very large three-dimensional and time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field, and they may interact with the ground surface. Another level of complication can arise if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents applications of the technique to single and multiple aircraft.

  18. Predicting protein complex geometries with a neural network.

    PubMed

    Chae, Myong-Ho; Krull, Florian; Lorenzen, Stephan; Knapp, Ernst-Walter

    2010-03-01

    A major challenge of the protein docking problem is to define scoring functions that can distinguish near-native protein complex geometries from a large number of non-native geometries (decoys) generated with noncomplexed protein structures (unbound docking). In this study, we have constructed a neural network that employs the information from atom-pair distance distributions of a large number of decoys to predict protein complex geometries. We found that docking prediction can be significantly improved using two different types of polar hydrogen atoms. To train the neural network, 2000 near-native decoys of even distance distribution were used for each of the 185 considered protein complexes. The neural network normalizes the information from different protein complexes using an additional protein complex identity input neuron for each complex. The parameters of the neural network were determined such that they mimic a scoring funnel in the neighborhood of the native complex structure. The neural network approach avoids the reference state problem, which occurs in deriving knowledge-based energy functions for scoring. We show that a distance-dependent atom pair potential performs much better than a simple atom-pair contact potential. We have compared the performance of our scoring function with other empirical and knowledge-based scoring functions such as ZDOCK 3.0, ZRANK, ITScore-PP, EMPIRE, and RosettaDock. In spite of the simplicity of the method and its functional form, our neural network-based scoring function achieves a reasonable performance in rigid-body unbound docking of proteins. Proteins 2010. (c) 2009 Wiley-Liss, Inc.

  19. Human factors in air traffic control: problems at the interfaces.

    PubMed

    Shouksmith, George

    2003-10-01

    The triangular ISIS model for describing the operation of human factors in complex sociotechnical organisations or systems is applied in this research to a large international air traffic control system. A large sample of senior Air Traffic Controllers was randomly assigned to small focus discussion groups, whose task was to identify problems occurring at the interfaces of the three major human factor components: individual, system impacts, and social. From these discussions, a number of significant interface problems, which could adversely affect the functioning of the Air Traffic Control System, emerged. The majority of these occurred at the Individual-System Impact and Individual-Social interfaces and involved a perceived need for further interface-centered training.

  20. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach

    PubMed Central

    2016-01-01

    Background Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling, as part of the Cognitive Agent-based Computing (CABC) framework, to model a complex communication network problem. Method We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the IoT domain; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235

  1. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.

  2. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  3. Network cosmology.

    PubMed

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.

  4. Network Cosmology

    PubMed Central

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S.; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology. PMID:23162688

  5. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work forwards the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. Advancement to the current practice demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
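
    A hedged sketch of the dependence measure at the heart of this approach: a simple histogram-based estimate of mutual information between two continuous variables, which picks up the nonlinear dependence that covariance misses. The bin count and the test signals are arbitrary choices, and this is only the measurement step, not the full decomposition or MIMIC pipeline.

```python
import numpy as np

# Histogram-based mutual information estimate between two continuous variables.
# Bin count and example signals are arbitrary; this illustrates the dependence
# measure only, not the thesis' full decomposition pipeline.
def mutual_information(x, y, bins=32):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=50_000)
    y_linear = 2.0 * x + 0.1 * rng.normal(size=x.size)
    y_nonlin = x ** 2 + 0.1 * rng.normal(size=x.size)   # near-zero covariance, strong dependence
    print("MI(x, 2x+noise):  ", mutual_information(x, y_linear))
    print("MI(x, x^2+noise): ", mutual_information(x, y_nonlin))
    print("cov(x, x^2+noise):", np.cov(x, y_nonlin)[0, 1])
```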

  6. Parallel solution of sparse one-dimensional dynamic programming problems

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1989-01-01

    Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.

  7. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing nodes into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.

  8. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimuli from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.

  9. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.

  10. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  11. Environmental Sensing of Expert Knowledge in a Computational Evolution System for Complex Problem Solving in Human Genetics

    NASA Astrophysics Data System (ADS)

    Greene, Casey S.; Hill, Douglas P.; Moore, Jason H.

    The relationship between interindividual variation in our genomes and variation in our susceptibility to common diseases is expected to be complex with multiple interacting genetic factors. A central goal of human genetics is to identify which DNA sequence variations predict disease risk in human populations. Our success in this endeavour will depend critically on the development and implementation of computational intelligence methods that are able to embrace, rather than ignore, the complexity of the genotype to phenotype relationship. To this end, we have developed a computational evolution system (CES) to discover genetic models of disease susceptibility involving complex relationships between DNA sequence variations. The CES approach is hierarchically organized and is capable of evolving operators of any arbitrary complexity. The ability to evolve operators distinguishes this approach from artificial evolution approaches using fixed operators such as mutation and recombination. Our previous studies have shown that a CES that can utilize expert knowledge about the problem in evolved operators significantly outperforms a CES unable to use this knowledge. This environmental sensing of external sources of biological or statistical knowledge is important when the search space is both rugged and large as in the genetic analysis of complex diseases. We show here that the CES is also capable of evolving operators which exploit one of several sources of expert knowledge to solve the problem. This is important for both the discovery of highly fit genetic models and because the particular source of expert knowledge used by evolved operators may provide additional information about the problem itself. This study brings us a step closer to a CES that can solve complex problems in human genetics in addition to discovering genetic models of disease.

  12. u-w formulation for dynamic problems in large deformation regime solved through an implicit meshfree scheme

    NASA Astrophysics Data System (ADS)

    Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.

    2017-12-01

    Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is herein developed within the framework of the u-w (solid displacement-relative fluid displacement) formulation for Biot's equations. In particular, porous media saturated with liquid water are considered, and the linearization of the linear momentum equations, taking into account all the inertia terms for both the solid and fluid phases, is presented for the first time. The spatial discretization is carried out through a meshfree method, in which the shape functions are based on the principle of local maximum entropy (LME). The current methodology is first validated with the dynamic consolidation of a soil column and the plastic shear band formation in a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through application to an embankment problem subjected to an earthquake.

  13. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.

  14. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
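
    The alternation described above (solve the component problems separately, then update Lagrange multipliers to pull the component models toward agreement) has the same shape as consensus ADMM. The toy numpy sketch below splits a least-squares problem into two data subsets coupled through a consensus variable; all names, sizes, and the choice of ADMM-style scaled multipliers are illustrative assumptions, not the authors' seismic inversion.

```python
import numpy as np

# Toy consensus ADMM: two data subsets, each with its own least-squares objective,
# constrained to agree on a common model. An illustrative stand-in for the augmented
# Lagrangian decomposition described above, not the actual joint seismic inversion.
rng = np.random.default_rng(0)
m_true = np.array([1.0, -2.0, 0.5])

A1, A2 = rng.normal(size=(40, 3)), rng.normal(size=(60, 3))   # two "data subsets"
d1 = A1 @ m_true + 0.01 * rng.normal(size=40)
d2 = A2 @ m_true + 0.01 * rng.normal(size=60)

rho = 1.0
z = np.zeros(3)                      # consensus model
u1, u2 = np.zeros(3), np.zeros(3)    # scaled Lagrange multipliers

for _ in range(100):
    # Component solves: each subset fits its data plus a pull toward the consensus.
    m1 = np.linalg.solve(A1.T @ A1 + rho * np.eye(3), A1.T @ d1 + rho * (z - u1))
    m2 = np.linalg.solve(A2.T @ A2 + rho * np.eye(3), A2.T @ d2 + rho * (z - u2))
    # Consensus update, then multiplier updates steering the components toward agreement.
    z = 0.5 * ((m1 + u1) + (m2 + u2))
    u1 = u1 + m1 - z
    u2 = u2 + m2 - z

print("consensus model:", z, " true model:", m_true)
```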

  15. Efficient methods and readily customizable libraries for managing complexity of large networks.

    PubMed

    Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can

    2018-01-01

    One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Oftentimes, users find themselves facing slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing the complexity of large networks is the use of hierarchical clustering and nesting, and applying expand-collapse operations on demand during analysis. Another such method is hiding currently unnecessary details, to later gradually reveal on demand. Major challenges when applying complexity reduction operations on large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries as plug-ins to the web-based graph visualization library Cytoscape.js to implement these methods as complexity management operations. Through efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks, making them more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open source, customizable software libraries, and by introducing some heuristics which can be applied on top of such complexity management techniques to help preserve the user's mental map.
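
    The collapse operation itself is straightforward to emulate outside the browser. The networkx sketch below merges all members of a cluster into a single compound node while rerouting edges that cross the cluster boundary, which is the basic idea behind expand-collapse complexity reduction; it is a generic Python illustration, not the Cytoscape.js extension API described in the abstract.

```python
import networkx as nx

# Collapse a set of nodes into one compound node, rerouting boundary edges to it.
# A generic illustration of expand-collapse complexity reduction; not the
# Cytoscape.js extension API described in the abstract.
def collapse_cluster(g, cluster, compound_id):
    cluster = set(cluster)
    h = g.copy()
    h.add_node(compound_id, members=sorted(cluster))   # remember members for a later expand
    for u, v in g.edges():
        if (u in cluster) != (v in cluster):           # edge crossing the cluster boundary
            outside = v if u in cluster else u
            h.add_edge(compound_id, outside)
    h.remove_nodes_from(cluster)                       # drop the collapsed members
    return h

if __name__ == "__main__":
    g = nx.karate_club_graph()
    cluster = [0, 1, 2, 3, 7, 13]                      # arbitrary example cluster
    small = collapse_cluster(g, cluster, "cluster-A")
    print(g.number_of_nodes(), "->", small.number_of_nodes(), "nodes after collapse")
```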

  16. The Complexity of Developing Properly Trained Education Professionals for African American Children: Exploring an African Indigenous Socialization Process

    ERIC Educational Resources Information Center

    Shockley, Kmt G.

    2011-01-01

    African centered educationists view the problems that Black children are facing in schools as a part of the disenfranchisement and disorganization of the Black community at large. In that vein, they do not believe that the problems which Black children are experiencing in America's public (and many private) schools are solvable by taking them out…

  17. Polynomial complexity despite the fermionic sign

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.

    2017-04-01

    It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.

  18. The quantum limit for gravitational-wave detectors and methods of circumventing it

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Caves, C. M.; Sandberg, V. D.; Zimmermann, M.; Drever, R. W. P.

    1979-01-01

    The Heisenberg uncertainty principle prevents the complex amplitude of a mechanical oscillator from being monitored more accurately than a certain limiting value. This 'quantum limit' is a serious obstacle to achieving a gravitational-wave detection sensitivity of 10^-21. This paper examines the principles of the back-action evasion technique and finds that this technique may be able to overcome the problem of the quantum limit. Back-action evasion does not, however, solve other detection problems, such as weak coupling, large amplifier noise, and large Nyquist noise.

  19. The U.S. Army Functional Concept for Intelligence 2020-2040

    DTIC Science & Technology

    2017-02-01

    Soldiers to mitigate many complex problems of the future OE. Improved or new analytic processes will use very large data sets to address emerging...increasing. Army collection against publicly available data sources may offer insights into social interconnectedness, political dynamics and complex... data used to support situational understanding. (5) Uncertainty and rapid change elevate the analytic risk associated with decision making and

  20. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  1. Large neighborhood search for the double traveling salesman problem with multiple stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell W; Van Hentenryck, Pascal

    This paper considers a complex real-life short-haul/long-haul pickup and delivery application. The problem can be modeled as a double traveling salesman problem (TSP) in which the pickups and the deliveries happen in the first and second TSPs, respectively. Moreover, the application features multiple stacks in which the items must be stored, and the pickups and deliveries must take place in reverse (LIFO) order for each stack. The goal is to minimize the total travel time while satisfying these constraints. This paper presents a large neighborhood search (LNS) algorithm which improves the best-known results on 65% of the available instances and is always within 2% of the best-known solutions.
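
    As a rough illustration of the destroy-and-repair loop that underlies large neighborhood search (the paper's stack and LIFO constraints are not modeled here), the following Python sketch applies LNS to a plain Euclidean TSP; the function names and parameter values are illustrative only.

      import math
      import random

      def tour_length(tour, dist):
          """Total length of a closed tour given a distance matrix."""
          return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      def destroy(tour, k, rng):
          """Remove k random cities from the tour; return the partial tour and removed cities."""
          removed = rng.sample(tour, k)
          partial = [c for c in tour if c not in removed]
          return partial, removed

      def repair(partial, removed, dist):
          """Greedy cheapest-insertion repair of the removed cities."""
          tour = list(partial)
          for c in removed:
              best_pos, best_cost = 0, float('inf')
              for i in range(len(tour)):
                  a, b = tour[i], tour[(i + 1) % len(tour)]
                  cost = dist[a][c] + dist[c][b] - dist[a][b]
                  if cost < best_cost:
                      best_pos, best_cost = i + 1, cost
              tour.insert(best_pos, c)
          return tour

      def lns(dist, iters=2000, k=3, seed=0):
          rng = random.Random(seed)
          n = len(dist)
          best = list(range(n))
          rng.shuffle(best)
          best_len = tour_length(best, dist)
          for _ in range(iters):
              partial, removed = destroy(best, k, rng)
              cand = repair(partial, removed, dist)
              cand_len = tour_length(cand, dist)
              if cand_len < best_len:          # accept only improving candidates
                  best, best_len = cand, cand_len
          return best, best_len

      if __name__ == "__main__":
          rng = random.Random(1)
          pts = [(rng.random(), rng.random()) for _ in range(30)]
          dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]
          tour, length = lns(dist)
          print(f"best tour length: {length:.3f}")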

  2. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. The complexity of implementing a network with this many components is difficult enough, while the maintenance of such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard failures and potential failures, and to minimize or eliminate user down time in a large network.

  3. Direct heuristic dynamic programming for damping oscillations in a large power system.

    PubMed

    Lu, Chao; Si, Jennie; Xie, Xiaorong

    2008-08-01

    This paper applies a neural-network-based approximate dynamic programming method, namely, the direct heuristic dynamic programming (direct HDP), to a large power system stability control problem. The direct HDP is a learning- and approximation-based approach to addressing nonlinear coordinated control under uncertainty. One of the major design parameters, the controller learning objective function, is formulated to directly account for network-wide low-frequency oscillation with the presence of nonlinearity, uncertainty, and coupling effect among system components. Results include a novel learning control structure based on the direct HDP with applications to two power system problems. The first case involves static var compensator supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case aims at addressing a difficult complex system challenge by providing a new solution to a large interconnected power network oscillation damping control problem that frequently occurs in the China Southern Power Grid.

  4. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data.

    PubMed

    Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon

    2015-01-01

    Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successful findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle in building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions including the least absolute shrinkage and selection operator (LASSO) and ridge regression limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
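
    For readers unfamiliar with the penalized regressions compared above, the following sketch (not the study's pipeline or data) fits L1- and L2-penalized logistic regression to a synthetic "large P, small N" data set with scikit-learn and reports cross-validated AUC; the sample sizes and penalty strength are illustrative only.

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic "large P, small N" data: 200 samples, 2000 features, few informative.
      X, y = make_classification(n_samples=200, n_features=2000, n_informative=20,
                                 random_state=0)

      models = {
          "lasso (L1)": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
          "ridge (L2)": LogisticRegression(penalty="l2", solver="liblinear", C=0.1),
      }

      for name, model in models.items():
          auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          print(f"{name}: mean CV AUC = {auc:.3f}")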

  5. Closed-Loop Control of Complex Networks: A Trade-Off between Time and Energy

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Zheng; Leng, Si-Yang; Lai, Ying-Cheng; Grebogi, Celso; Lin, Wei

    2017-11-01

    Controlling complex nonlinear networks remains largely an unsolved problem. Existing works focus either on open-loop control strategies and their energy consumption or on closed-loop control schemes with an infinite-time duration. We articulate a finite-time, closed-loop controller with an eye toward the physical and mathematical underpinnings of the trade-off between the control time and energy as well as their dependence on the network parameters and structure. The closed-loop controller is tested on a large number of real systems including stem cell differentiation, food webs, random ecosystems, and spiking neuronal networks. Our results represent a step forward in developing a rigorous and general framework to control nonlinear dynamical networks with a complex topology.

  6. High-frequency CAD-based scattering model: SERMAT

    NASA Astrophysics Data System (ADS)

    Goupil, D.; Boutillier, M.

    1991-09-01

    Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed; only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects that are large compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows RCS calculations that are fast and sufficiently precise to meet industry requirements in the domain of stealth.

  7. Advanced computer architecture for large-scale real-time applications.

    DOT National Transportation Integrated Search

    1973-04-01

    Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater perf...

  8. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.

  9. Large Spatial and Temporal Separations of Cause and Effect in Policy Making - Dealing with Non-linear Effects

    NASA Astrophysics Data System (ADS)

    McCaskill, John

    There can be large spatial and temporal separation of cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to see and plan for the probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that only seem improbable due to the issue of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects. This chapter highlights those problem issues and offers insights into the usefulness of agent-based modeling (ABM) in dealing with non-linear effects in complex policy-making environments.

  10. Computational complexity of ecological and evolutionary spatial dynamics

    PubMed Central

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.

    2015-01-01

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
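
    The central quantity discussed above, the probability that a single mutant takes over a resident population, can be estimated by brute-force simulation. The sketch below runs a Moran birth-death process on a cycle graph; it illustrates the question but reproduces none of the paper's complexity results, and the population size, fitness and trial counts are arbitrary.

      import random

      def fixation_probability(n=20, r=1.5, trials=5000, seed=0):
          """Estimate the probability that one mutant of relative fitness r
          takes over a resident population of size n arranged on a cycle
          (Moran birth-death updating)."""
          rng = random.Random(seed)
          fixed = 0
          for _ in range(trials):
              mutant = [False] * n
              mutant[rng.randrange(n)] = True
              count = 1
              while 0 < count < n:
                  # pick a reproducing individual proportional to fitness
                  weights = [r if m else 1.0 for m in mutant]
                  i = rng.choices(range(n), weights=weights)[0]
                  # its offspring replaces a random neighbour on the cycle
                  j = (i + rng.choice((-1, 1))) % n
                  if mutant[j] != mutant[i]:
                      count += 1 if mutant[i] else -1
                      mutant[j] = mutant[i]
              fixed += count == n
          return fixed / trials

      if __name__ == "__main__":
          print(f"estimated fixation probability: {fixation_probability():.3f}")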

  11. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
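
    The parallel hyper-heuristic itself cannot be reconstructed from the abstract; as a minimal baseline that only illustrates the constraint structure of the FAP, the sketch below greedily assigns the lowest non-conflicting frequency to each transceiver of a toy interference graph. All names and data here are hypothetical.

      def greedy_fap(interference, frequencies):
          """Assign to each transceiver the first frequency not used by any
          interfering neighbour; returns a dict transceiver -> frequency.

          interference: dict node -> set of conflicting nodes
          frequencies:  ordered list of available frequencies
          """
          assignment = {}
          # order transceivers by decreasing conflict degree (hardest first)
          for node in sorted(interference, key=lambda n: -len(interference[n])):
              used = {assignment[nb] for nb in interference[node] if nb in assignment}
              for f in frequencies:
                  if f not in used:
                      assignment[node] = f
                      break
              else:
                  raise ValueError(f"no feasible frequency for transceiver {node}")
          return assignment

      if __name__ == "__main__":
          interference = {
              "t1": {"t2", "t3"}, "t2": {"t1", "t3"},
              "t3": {"t1", "t2", "t4"}, "t4": {"t3"},
          }
          print(greedy_fap(interference, [1, 2, 3]))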

  12. Attentional bias induced by solving simple and complex addition and subtraction problems.

    PubMed

    Masson, Nicolas; Pesenti, Mauro

    2014-01-01

    The processing of numbers has been shown to induce shifts of spatial attention in simple probe detection tasks, with small numbers orienting attention to the left and large numbers to the right side of space. Recently, the investigation of this spatial-numerical association has been extended to mental arithmetic with the hypothesis that solving addition or subtraction problems may induce attentional displacements (to the right and to the left, respectively) along a mental number line onto which the magnitude of the numbers would range from left to right, from small to large numbers. Here we investigated such attentional shifts using a target detection task primed by arithmetic problems in healthy participants. The constituents of the addition and subtraction problems (first operand; operator; second operand) were flashed sequentially in the centre of a screen, then followed by a target on the left or the right side of the screen, which the participants had to detect. This paradigm was employed with arithmetic facts (Experiment 1) and with more complex arithmetic problems (Experiment 2) in order to assess the effects of the operation, the magnitude of the operands, the magnitude of the results, and the presence or absence of a requirement for the participants to carry or borrow numbers. The results showed that arithmetic operations induce some spatial shifts of attention, possibly through a semantic link between the operation and space.

  13. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem enables improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture processes, and provide information for more detailed rupture history analysis.
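
    The uncertainty analysis mentioned above rests on Markov chain Monte Carlo sampling. The following sketch shows a generic random-walk Metropolis sampler for a toy two-parameter Gaussian posterior; it is not the seismic forward model or the MHS parameterization, and the step size and step count are arbitrary.

      import math
      import random

      def log_posterior(theta, data):
          """Toy Gaussian likelihood with a flat prior: data ~ N(mu, sigma)."""
          mu, log_sigma = theta
          sigma = math.exp(log_sigma)
          return sum(-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) for x in data)

      def metropolis(data, n_steps=20000, step=0.1, seed=0):
          rng = random.Random(seed)
          theta = [0.0, 0.0]
          lp = log_posterior(theta, data)
          samples = []
          for _ in range(n_steps):
              prop = [t + rng.gauss(0, step) for t in theta]
              lp_prop = log_posterior(prop, data)
              if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
                  theta, lp = prop, lp_prop
              samples.append(list(theta))
          return samples

      if __name__ == "__main__":
          rng = random.Random(1)
          data = [rng.gauss(3.0, 0.5) for _ in range(100)]
          samples = metropolis(data)
          mus = [s[0] for s in samples[5000:]]           # discard burn-in
          print(f"posterior mean of mu ~ {sum(mus) / len(mus):.2f}")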

  14. Incorporation of large guest molecules into liposomes via chemical reactions in lipid membranes.

    PubMed

    Tsuchiya, Yuki; Sugikawa, Kouta; Ueda, Masafumi; Ikeda, Atsushi

    2017-02-22

    The incorporation of hydrophobic guest molecules into lipid membranes by the exchange of the guest molecule from a cyclodextrin (CDx) complex to a liposome is limited to guest molecules that can be included in CDxs. To solve this problem, large guest molecules were incorporated into liposomes by chemical reactions of guest molecules in lipid membranes. Stable lipid-membrane-incorporated fullerene derivatives with large substituent(s) were prepared by Diels-Alder reactions in lipid membranes.

  15. Associating optical measurements and estimating orbits of geocentric objects with a Genetic Algorithm: performance limitations.

    NASA Astrophysics Data System (ADS)

    Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent

    2016-07-01

    Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem can be solved in polynomial time. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g. satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations, which means that the algorithm is restricted to orbits described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
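
    The EGA details are given in the cited previous work; as a generic illustration of the elitist genetic algorithm machinery (tournament selection, crossover, mutation, elitism), the sketch below maximizes a toy binary objective and makes no attempt to model the observation-association encoding or the Lambert-based fitness. All parameter values are placeholders.

      import random

      def elitist_ga(fitness, n_bits, pop_size=40, generations=200,
                     elite=2, p_mut=0.02, seed=0):
          """Maximize fitness(bitstring) with an elitist genetic algorithm."""
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          for _ in range(generations):
              scored = sorted(pop, key=fitness, reverse=True)
              new_pop = [list(ind) for ind in scored[:elite]]   # elitism: keep the best as-is
              while len(new_pop) < pop_size:
                  # tournament selection of two parents
                  p1 = max(rng.sample(pop, 3), key=fitness)
                  p2 = max(rng.sample(pop, 3), key=fitness)
                  cut = rng.randrange(1, n_bits)                # one-point crossover
                  child = p1[:cut] + p2[cut:]
                  child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
                  new_pop.append(child)
              pop = new_pop
          return max(pop, key=fitness)

      if __name__ == "__main__":
          # toy objective: maximize the number of ones in the bitstring
          best = elitist_ga(sum, n_bits=50)
          print(f"best fitness: {sum(best)}")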

  16. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.

  17. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
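
    The authors' collective-influence optimization is not reproduced here. The sketch below only illustrates the percolation quantity being minimized: it removes highest-degree nodes from a random graph with networkx and tracks the relative size of the giant component, a simple degree-based baseline of the kind the paper improves upon. Graph size and removal fraction are arbitrary.

      import networkx as nx

      def giant_component_curve(G, fraction=0.2):
          """Remove the highest-degree nodes one by one (a simple baseline,
          not the paper's collective-influence algorithm) and record the
          relative size of the largest connected component."""
          H = G.copy()
          n = H.number_of_nodes()
          curve = []
          for _ in range(int(fraction * n)):
              node = max(H.degree, key=lambda kv: kv[1])[0]   # current highest-degree node
              H.remove_node(node)
              giant = max(nx.connected_components(H), key=len)
              curve.append(len(giant) / n)
          return curve

      if __name__ == "__main__":
          G = nx.erdos_renyi_graph(n=1000, p=0.005, seed=0)
          curve = giant_component_curve(G)
          print(f"giant component after removals: {curve[-1]:.2f} of the network")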

  18. Influence maximization in complex networks through optimal percolation.

    PubMed

    Morone, Flaviano; Makse, Hernán A

    2015-08-06

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  19. Lessons Learned from Crowdsourcing Complex Engineering Tasks.

    PubMed

    Staffelbach, Matthew; Sempolinski, Peter; Kijewski-Correa, Tracy; Thain, Douglas; Wei, Daniel; Kareem, Ahsan; Madey, Gregory

    2015-01-01

    Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations. Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks. This was done to determine if the benefits of crowdsourcing could be harnessed to accurately and effectively contribute to solving complex real world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data. We compared the skill of the anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students, making a first pass at analyzing wind simulation data. For the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. A second phase of our experiment instructed crowd workers and students to create simulations on our Virtual Wind Tunnel website to solve a more complex task. With a sufficiently comprehensive tutorial and compensation similar to typical crowd-sourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students with more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers. As tasks become more complex, the employment relationship begins to become more akin to outsourcing than crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems.

  20. Path changing methods applied to the 4-D guidance of STOL aircraft.

    DOT National Transportation Integrated Search

    1971-11-01

    Prior to the advent of large-scale commercial STOL service, some challenging navigation and guidance problems must be solved. Proposed terminal area operations may require that these aircraft be capable of accurately flying complex flight paths, and ...

  1. Oak Regeneration: A Knowledge Synthesis

    Treesearch

    H. Michael Rauscher; David L. Loftis; Charles E. McGee; Christopher V. Worth

    1997-01-01

    This scientific literature is presented as hypertext; to view it, you must download and install the hypertext software. Abstract: The scientific literature concerning oak regeneration problems is lengthy, complex, paradoxical, and often perplexing. Despite a large scientific literature and numerous conference...

  2. Factors influencing efficient structure of fuel and energy complex

    NASA Astrophysics Data System (ADS)

    Sidorova, N. G.; Novikova, S. A.

    2017-10-01

    The development of the Russian fuel and energy complex is a priority of national economic policy, and the Far East is a link between Russia and the Asia-Pacific region. Large-scale development of the numerous resources of the Far East will drive industrial development, raise living standards and strengthen Russia's position in the global energy market. Revealing the factors that shape an efficient structure for the fuel and energy complex is therefore a pressing task. Through an in-depth analysis of the complex's development trends and problems, the authors show ways to improve its efficiency.

  3. Research directions in large scale systems and decentralized control

    NASA Technical Reports Server (NTRS)

    Tenney, R. R.

    1980-01-01

    Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.

  4. A numerical approach for simulating fluid structure interaction of flexible thin shells undergoing arbitrarily large deformations in complex domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilmanov, Anvar, E-mail: agilmano@umn.edu; Le, Trung Bao, E-mail: lebao002@umn.edu; Sotiropoulos, Fotis, E-mail: fotis@umn.edu

    We present a new numerical methodology for simulating fluid–structure interaction (FSI) problems involving thin flexible bodies in an incompressible fluid. The FSI algorithm uses the Dirichlet–Neumann partitioning technique. The curvilinear immersed boundary method (CURVIB) is coupled with a rotation-free finite element (FE) model for thin shells, enabling the efficient simulation of FSI problems with arbitrarily large deformation. Turbulent flow problems are handled using large-eddy simulation with the dynamic Smagorinsky model in conjunction with a wall model to reconstruct boundary conditions near immersed boundaries. The CURVIB and FE solvers are coupled together on the flexible solid–fluid interfaces where the structural nodal positions, displacements, velocities and loads are calculated and exchanged between the two solvers. Loose and strong coupling FSI schemes are employed, enhanced by the Aitken acceleration technique, to ensure robust coupling and fast convergence, especially for low mass ratio problems. The coupled CURVIB-FE-FSI method is validated by applying it to simulate two FSI problems involving thin flexible structures: 1) vortex-induced vibrations of a cantilever mounted in the wake of a square cylinder at different mass ratios and at low Reynolds number; and 2) the more challenging high Reynolds number problem involving the oscillation of an inverted elastic flag. For both cases the computed results are in excellent agreement with previous numerical simulations and/or experimental measurements. Grid convergence studies are carried out for both the cantilever and inverted flag problems and demonstrate the convergence of the CURVIB-FE-FSI method. Finally, the capability of the new methodology in simulations of complex cardiovascular flows is demonstrated by applying it to simulate the FSI of a tri-leaflet, prosthetic heart valve in an anatomic aorta and under physiologic pulsatile conditions.

  5. Solving optimization problems on computational grids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, S. J.; Mathematics and Computer Science

    2001-05-01

    Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, were among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to the development of the runtime support library MW for implementing algorithms with a master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.
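
    The MW library and the Condor/Globus infrastructure are not shown here; the sketch below merely illustrates the master-worker control structure on a single machine using Python's multiprocessing module, with a made-up objective function standing in for a grid-scale optimization task.

      from multiprocessing import Pool

      def evaluate_candidate(x):
          """Worker task: score one candidate solution (toy objective)."""
          return x, (x - 3.21) ** 2

      if __name__ == "__main__":
          candidates = [i * 0.01 for i in range(1000)]
          # The master farms independent evaluations out to a pool of workers
          # and gathers the results, mirroring the master-worker control structure.
          with Pool(processes=4) as pool:
              results = pool.map(evaluate_candidate, candidates)
          best_x, best_val = min(results, key=lambda r: r[1])
          print(f"best candidate: x = {best_x:.2f}, objective = {best_val:.4f}")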

  6. Rule-based modeling and simulations of the inner kinetochore structure.

    PubMed

    Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar

    2013-09-01

    Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variety of protein complexes. Available classical modeling approaches are often insufficient for the detailed analysis of very large and complex networks. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimation of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome only occurs efficiently at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance the protein bridge barely formed, pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. WE-D-303-00: Computational Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA

    2015-06-15

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within the simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: (1) understand the need for and requirements of computational phantoms in medical physics research; (2) discuss the developments and applications of computational phantoms; (3) know the promises and limitations of computational phantoms in solving complex problems.

  8. Practical problems in aggregating expert opinions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booker, J.M.; Picard, R.R.; Meyer, M.A.

    1993-11-01

    Expert opinion is data given by a qualified person in response to a technical question. In these analyses, expert opinion provides information where other data are either sparse or non-existent. Improvements in forecasting result from the advantageous addition of expert opinion to observed data in many areas, such as meteorology and econometrics. More generally, analyses of large, complex systems often involve experts on various components of the system supplying input to a decision process; applications include such wide-ranging areas as nuclear reactor safety, management science, and seismology. For large or complex applications, no single expert may be knowledgeable enough about the entire application. In other problems, decision makers may find it comforting that a consensus or aggregation of opinions is usually better than a single opinion. Many risk and reliability studies require a single estimate for modeling, analysis, reporting, and decision making purposes. For problems with large uncertainties, the strategy of combining as diverse a set of experts as possible hedges against underestimation of that uncertainty. Decision makers are frequently faced with the task of selecting the experts and combining their opinions. However, the aggregation is often the responsibility of an analyst. Whether the decision maker or the analyst does the aggregation, the input for it, such as providing weights for experts or estimating other parameters, is imperfect owing to a lack of omniscience. Aggregation methods for expert opinions have existed for over thirty years; yet many of the difficulties with their use remain unresolved. The bulk of these problem areas are summarized in the sections that follow: sensitivities of results to assumptions, weights for experts, correlation of experts, and handling uncertainties. The purpose of this paper is to discuss the sources of these problems and describe their effects on aggregation.

  9. Classification of time series patterns from complex dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.

  10. Investigation of finite element: ABC methods for electromagnetic field simulation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Volakis, John L.; Nguyen, J.

    1994-01-01

    The mechanics of wave propagation in the presence of obstacles is of great interest in many branches of engineering and applied mathematics like electromagnetics, fluid dynamics, geophysics, seismology, etc. Such problems can be broadly classified into two categories: the bounded domain or the closed problem and the unbounded domain or the open problem. Analytical techniques have been derived for the simpler problems; however, the need to model complicated geometrical features, complex material coatings and fillings, and to adapt the model to changing design parameters have inevitably tilted the balance in favor of numerical techniques. The modeling of closed problems presents difficulties primarily in proper meshing of the interior region. However, problems in unbounded domains pose a unique challenge to computation, since the exterior region is inappropriate for direct implementation of numerical techniques. A large number of solutions have been proposed but only a few have stood the test of time and experiment. The goal of this thesis is to develop an efficient and reliable partial differential equation technique to model large three dimensional scattering problems in electromagnetics.

  11. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
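
    As a stand-in for the article's Bayesian metric (which is not reproduced here), the sketch below selects among a small library of polynomial response surfaces by cross-validation error on noisy one-dimensional samples; the data, fold scheme and candidate degrees are illustrative only.

      import numpy as np

      def cv_error(x, y, degree, folds=5):
          """Mean squared cross-validation error of a polynomial response surface."""
          idx = np.arange(len(x)) % folds
          errs = []
          for k in range(folds):
              train, test = idx != k, idx == k
              coeffs = np.polyfit(x[train], y[train], degree)
              pred = np.polyval(coeffs, x[test])
              errs.append(np.mean((pred - y[test]) ** 2))
          return np.mean(errs)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = np.linspace(-1, 1, 40)
          y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)   # noisy "simulation" samples

          # Candidate library: polynomial surfaces of increasing degree.
          errors = {d: cv_error(x, y, d) for d in (1, 2, 3, 5, 8)}
          best = min(errors, key=errors.get)
          print(f"selected degree {best} (CV errors: "
                + ", ".join(f"{d}: {e:.4f}" for d, e in errors.items()) + ")")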

  12. Supporting Knowledge Transfer in IS Deployment Projects

    NASA Astrophysics Data System (ADS)

    Schönström, Mikael

    To deploy new information systems is an expensive and complex task, and seldom results in successful usage where the system adds strategic value to the firm (e.g. Sharma et al. 2003). It has been argued that innovation diffusion is a knowledge integration problem (Newell et al. 2000). Knowledge about business processes, deployment processes, information systems and technology is needed in a large-scale deployment of a corporate IS. These deployments can therefore to a large extent be regarded as a knowledge management (KM) problem. An effective deployment requires that knowledge about the system is effectively transferred to the target organization (Ko et al. 2005).

  13. Component mode synthesis and large deflection vibration of complex structures. Volume 1: Examples of NASTRAN modal synthesis capability

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Shen, Mo-How

    1987-01-01

    The use of the NASTRAN modal synthesis capability is illustrated. A classical truss problem is examined and the results are compared to results from other methods to test for accuracy. The problem is examined using both fixed-interface modes and free-interface modes. The solution is carried out for an applied dynamic load as far as the recovery of forces in individual members as a function of time. Another small beam problem is used to compare different means of combining substructures.

  14. Network Access Control List Situation Awareness

    ERIC Educational Resources Information Center

    Reifers, Andrew

    2010-01-01

    Network security is a large and complex problem being addressed by multiple communities. Nevertheless, current theories in networking security appear to overestimate network administrators' ability to understand network access control lists (NACLs), providing few context-specific user analyses. Consequently, the current research generally seems to…

  15. Risk prioritisation using the analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Sum, Rabihah Md.

    2015-12-01

    This study demonstrated how to use the Analytic Hierarchy Process (AHP) to prioritise the risks of an insurance company. AHP is a technique for structuring complex problems by arranging the elements of the problem in a hierarchy, assigning numerical values to subjective judgements on the relative importance of the elements, and synthesizing the judgements to determine which elements have the highest priority. The study is motivated by the wide application of AHP as a prioritisation technique for complex problems. It aims to show that AHP can mitigate some limitations of risk assessment based on likelihood and impact alone. The study shows that AHP is able to provide a consistency check on subjective judgements, organise a large number of risks into a structured framework, assist risk managers in making explicit risk trade-offs, and provide an easy to understand and systematic risk assessment process.
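
    A minimal AHP calculation can be written in a few lines of numpy: priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The judgement values below are made up for illustration and are not taken from the study.

      import numpy as np

      # Saaty's random consistency index for matrices of size 1..5.
      RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

      def ahp_priorities(A):
          """Priority weights and consistency ratio for a pairwise comparison matrix A."""
          vals, vecs = np.linalg.eig(A)
          k = np.argmax(vals.real)                      # principal eigenvalue
          w = np.abs(vecs[:, k].real)
          w /= w.sum()                                  # normalized priority vector
          n = A.shape[0]
          ci = (vals[k].real - n) / (n - 1)             # consistency index
          cr = ci / RI[n] if RI[n] else 0.0             # consistency ratio
          return w, cr

      if __name__ == "__main__":
          # Hypothetical pairwise judgements for three risks (reciprocal matrix).
          A = np.array([[1.0, 3.0, 5.0],
                        [1/3, 1.0, 2.0],
                        [1/5, 1/2, 1.0]])
          w, cr = ahp_priorities(A)
          print("priorities:", np.round(w, 3), "consistency ratio:", round(cr, 3))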

  16. Adaptive simplification of complex multiscale systems.

    PubMed

    Chiavazzo, Eliodoro; Karlin, Ilya

    2011-03-01

    A fully adaptive methodology is developed for reducing the complexity of large dissipative systems. This represents a significant step toward extracting essential physical knowledge from complex systems, by addressing the challenging problem of a minimal number of variables needed to exactly capture the system dynamics. Accurate reduced description is achieved, by construction of a hierarchy of slow invariant manifolds, with an embarrassingly simple implementation in any dimension. The method is validated with the autoignition of the hydrogen-air mixture where a reduction to a cascade of slow invariant manifolds is observed.

  17. The problem of complex eigensystems in the semianalytical solution for advancement of time in solute transport simulations: a new method using real arithmetic

    USGS Publications Warehouse

    Umari, Amjad M.J.; Gorelick, Steven M.

    1986-01-01

    In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
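
    The authors' algorithm is not reproduced here, but the underlying idea, handling a complex eigensystem of a real matrix entirely in real arithmetic, can be illustrated with the real Schur decomposition available in scipy, where each complex conjugate pair appears as a 2x2 real block on the diagonal. The example matrix is a made-up stand-in for an advection-dominated coefficient matrix.

      import numpy as np
      from scipy.linalg import schur

      # A real, nonsymmetric matrix of the kind produced by advection-dominated
      # discretizations: its eigenvalues include a complex conjugate pair.
      A = np.array([[0.0, -2.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 3.0]])

      print("eigenvalues:", np.linalg.eigvals(A))

      # Real Schur decomposition A = Z T Z^T uses only real arithmetic; each
      # complex conjugate pair shows up as a 2x2 block on the diagonal of T.
      T, Z = schur(A, output="real")
      print("real quasi-triangular factor T:\n", np.round(T, 3))
      print("reconstruction error:", np.linalg.norm(Z @ T @ Z.T - A))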

  18. Applied mathematical problems in modern electromagnetics

    NASA Astrophysics Data System (ADS)

    Kriegsman, Gregory

    1994-05-01

    We have primarily investigated two classes of electromagnetic problems. The first contains the quantitative description of microwave heating of dispersive and conductive materials. Such problems arise, for example, when biological tissues are exposed, accidentally or purposefully, to microwave radiation. Other instances occur in ceramic processing, such as sintering and microwave-assisted chemical vapor infiltration, and in other industrial drying processes, such as the curing of paints and concrete. The second class characterizes the scattering of microwaves by complex targets which possess two or more disparate length and/or time scales. Spatially complex scatterers arise in a variety of applications, such as large gratings and slowly changing guiding structures. The former are useful in developing microstrip energy couplers, while the latter can be used to model anatomical subsystems (e.g., the open guiding structure composed of two legs and the adjoining lower torso). Temporally complex targets occur in applications involving dispersive media whose relaxation times differ by orders of magnitude from thermal and/or electromagnetic time scales. For both cases the mathematical description of the problems gives rise to complicated ill-conditioned boundary value problems, whose accurate solutions require a blend of asymptotic techniques, such as multiscale methods and matched asymptotic expansions, and numerical methods incorporating radiation boundary conditions, such as finite differences and finite elements.

  19. Towards communication-efficient quantum oblivious key distribution

    NASA Astrophysics Data System (ADS)

    Panduranga Rao, M. V.; Jakobi, M.

    2013-01-01

    Symmetrically private information retrieval, a fundamental problem in the field of secure multiparty computation, is defined as follows: A database D of N bits held by Bob is queried by a user Alice who is interested in the bit Db in such a way that (1) Alice learns Db and only Db and (2) Bob does not learn anything about Alice's choice b. While solutions to this problem in the classical domain rely largely on unproven computational complexity theoretic assumptions, it is also known that perfect solutions that guarantee both database and user privacy are impossible in the quantum domain. Jakobi et al. [Phys. Rev. A 83, 022301 (2011)] proposed a protocol for oblivious transfer using well-known quantum key distribution (QKD) techniques to establish an oblivious key to solve this problem. Their solution provided a good degree of database and user privacy (using physical principles like the impossibility of perfectly distinguishing nonorthogonal quantum states and the impossibility of superluminal communication) while being loss-resistant and implementable with commercial QKD devices (due to the use of the Scarani-Acin-Ribordy-Gisin 2004 protocol). However, their quantum oblivious key distribution (QOKD) protocol requires a communication complexity of O(N log N). Since modern databases can be extremely large, it is important to reduce this communication as much as possible. In this paper, we first suggest a modification of their protocol wherein the number of qubits that need to be exchanged is reduced to O(N). A subsequent generalization reduces the quantum communication complexity even further in such a way that only a few hundred qubits need to be transferred even for very large databases.

  20. MDTS: automatic complex materials design using Monte Carlo tree search.

    PubMed

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-01-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  1. MDTS: automatic complex materials design using Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Dieb, Thaer M.; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-12-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  2. ARPA surveillance technology for detection of targets hidden in foliage

    NASA Astrophysics Data System (ADS)

    Hoff, Lawrence E.; Stotts, Larry B.

    1994-02-01

    The processing of large quantities of synthetic aperture radar data in real time is a complex problem. Even the image formation process taxes today's most advanced computers. The use of complex algorithms with multiple channels adds another dimension to the computational problem. Advanced Research Projects Agency (ARPA) is currently planning on using the Paragon parallel processor for this task. The Paragon is small enough to allow its use in a sensor aircraft. Candidate algorithms will be implemented on the Paragon for evaluation for real time processing. In this paper ARPA technology developments for detecting targets hidden in foliage are reviewed and examples of signal processing techniques on field collected data are presented.

  3. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by 'self-organized criticality,' a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire 'gene pool' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called 'avalanches,' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
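
    As a toy illustration of the extremal optimization idea (rank elements by local fitness and always perturb one of the worst), the sketch below applies a tau-EO-style update to MAX-CUT on a small random graph. It is a sketch under assumed parameter values, not the authors' benchmark implementation.

      import random

      def extremal_optimization_maxcut(edges, n, steps=20000, tau=1.4, seed=0):
          """tau-EO for MAX-CUT: repeatedly flip one of the worst-fit vertices."""
          rng = random.Random(seed)
          side = [rng.randint(0, 1) for _ in range(n)]
          adj = [[] for _ in range(n)]
          for u, v in edges:
              adj[u].append(v)
              adj[v].append(u)

          def cut_size(s):
              return sum(s[u] != s[v] for u, v in edges)

          best, best_cut = list(side), cut_size(side)
          for _ in range(steps):
              # local fitness: fraction of a vertex's edges that are currently cut
              fit = [sum(side[v] != side[u] for u in adj[v]) / max(len(adj[v]), 1)
                     for v in range(n)]
              ranked = sorted(range(n), key=lambda v: fit[v])   # worst-fit first
              k = min(int(rng.paretovariate(tau - 1)), n) - 1   # power-law rank choice
              side[ranked[k]] ^= 1                              # flip the chosen vertex
              c = cut_size(side)
              if c > best_cut:
                  best, best_cut = list(side), c
          return best, best_cut

      if __name__ == "__main__":
          rng = random.Random(1)
          n = 60
          edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.1]
          _, cut = extremal_optimization_maxcut(edges, n)
          print(f"{len(edges)} edges, best cut found: {cut}")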

  4. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic level, for example, by means of molecular dynamics methods. The most interesting problems are related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Given such time-consuming computational tasks, there is a need for software that monitors these computations automatically and in a unified way. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also a nontrivial problem, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for subsequent detailed molecular dynamics calculations and for managing those calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  5. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  6. Conducting Automated Test Assembly Using the Premium Solver Platform Version 7.0 with Microsoft Excel and the Large-Scale LP/QP Solver Engine Add-In

    ERIC Educational Resources Information Center

    Cor, Ken; Alves, Cecilia; Gierl, Mark J.

    2008-01-01

    This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…

  7. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized with the finite element method (FEM), which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
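
    Since the abstract singles out sparse matrix-vector multiplication (SPMV) as the kernel that dominates the Krylov eigensolvers, a minimal compressed sparse row (CSR) matrix-vector product is sketched below. Production codes would rely on tuned libraries such as scipy.sparse, but the data layout and access pattern are the same, which is why the mesh/unknown ordering discussed in this work affects cache behaviour and SPMV speed.

```python
import numpy as np

# Minimal CSR sparse matrix-vector product y = A @ x.
# indptr[row] .. indptr[row+1] delimits the stored entries of each row.

def csr_matvec(indptr, indices, data, x):
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        # Dot product of the sparse row with x: only stored entries are touched.
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

if __name__ == "__main__":
    # 3x3 example:  [[2, 0, 1],
    #                [0, 3, 0],
    #                [4, 0, 5]]
    indptr = np.array([0, 2, 3, 5])
    indices = np.array([0, 2, 1, 0, 2])
    data = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
    x = np.array([1.0, 1.0, 1.0])
    print(csr_matvec(indptr, indices, data, x))   # -> [3. 3. 9.]
```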

  8. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015)
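
    A hedged sketch of the Collective Influence heuristic associated with the optimal-percolation picture summarized above is given below: each node is scored as CI_l(i) = (k_i - 1) times the sum of (k_j - 1) over the nodes on the ball boundary at distance l, and the highest-scoring node is removed adaptively. The choice l = 2, the random test graph and the simple greedy loop are illustrative; the paper's scalable implementation differs in detail.

```python
import networkx as nx

# Greedy, adaptive removal of high-CI nodes (scores recomputed after each removal).
# CI_l(i) = (k_i - 1) * sum_{j at distance l from i} (k_j - 1).

def collective_influence(G, node, radius=2):
    k_i = G.degree(node)
    if k_i == 0:
        return 0
    lengths = nx.single_source_shortest_path_length(G, node, cutoff=radius)
    frontier = [j for j, d in lengths.items() if d == radius]
    return (k_i - 1) * sum(G.degree(j) - 1 for j in frontier)

def top_influencers(G, n_remove, radius=2):
    H = G.copy()
    influencers = []
    for _ in range(n_remove):
        best = max(H.nodes, key=lambda v: collective_influence(H, v, radius))
        influencers.append(best)
        H.remove_node(best)
    return influencers

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(200, 0.03, seed=0)
    print("candidate influencers:", top_influencers(G, 5))
```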

  9. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  10. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1995-01-01

    DeMAID (Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problems such as large platforms in outer space. Groups modular subsystems on basis of interactions among them. Saves considerable amount of money and time in total design process, particularly in new design problem in which order of modules has not been defined. Originally written for design problems, also applicable to problems containing modules (processes) that take inputs and generate outputs. Available in three machine versions: Macintosh written in Symantec's Think C 3.01, Sun, and SGI IRIS in C language.

  11. Large-eddy simulation of a boundary layer with concave streamwise curvature

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1994-01-01

    Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.

  12. Low-Complexity User Selection for Rate Maximization in MIMO Broadcast Channels with Downlink Beamforming

    PubMed Central

    Silva, Adão; Gameiro, Atílio

    2014-01-01

    We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem, and its main goal is to create a set of quasiorthogonal users. The proposed algorithm exploits easily computed physical metrics of the wireless channels so that a null space projection power can be approximated efficiently. Based on the derived metrics we present a mathematical model that describes the dynamics of the user selection process, which renders the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient at forming groups of quasiorthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928

  13. Master-slave system with force feedback based on dynamics of virtual model

    NASA Technical Reports Server (NTRS)

    Nojima, Shuji; Hashimoto, Hideki

    1994-01-01

    A master-slave system can extend the manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave required for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can help the human operator by exploiting its strengths, such as fast calculation and large memory. The authors suggest that the slave be a dextrous hand with many degrees of freedom, able to manipulate an object of known shape. It is further suggested that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model, a physical model constructed in a computer with physical parameters as if it were in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model is better than a virtual model with fixed damping.

  14. A review on recent contribution of meshfree methods to structure and fracture mechanics applications.

    PubMed

    Daxini, S D; Prajapati, J M

    2014-01-01

    Meshfree methods are viewed as next generation computational techniques. With the evident limitations of conventional grid based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various types of structure mechanics and fracture mechanics applications like bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis and some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of integrals over the domain, meshless methods are computationally expensive compared to conventional mesh based methods. Some improved versions of original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.

  15. Handling Practicalities in Agricultural Policy Optimization for Water Quality Improvements

    EPA Science Inventory

    Bilevel and multi-objective optimization methods are often useful to spatially target agri-environmental policy throughout a watershed. This type of problem is complex and is comprised of a number of practicalities: (i) a large number of decision variables, (ii) at least two inte...

  16. Structural Equation Modeling of School Violence Data: Methodological Considerations

    ERIC Educational Resources Information Center

    Mayer, Matthew J.

    2004-01-01

    Methodological challenges associated with structural equation modeling (SEM) and structured means modeling (SMM) in research on school violence and related topics in the social and behavioral sciences are examined. Problems associated with multiyear implementations of large-scale surveys are discussed. Complex sample designs, part of any…

  17. USSR and Eastern Europe Scientific Abstracts, Engineering and Equipment. Number 25.

    DTIC Science & Technology

    1976-10-29

    is necessary to consider the problem of diffraction at a cylindrical cavity. Some methods of solving this problem become very unwieldy, when ... applied to such a cavity of large wave dimensions, even with the aid of a digital computer. In the simpler Watson method, the series representing the ... potential of cylindrical waves is transformed to an integral in the complex plane and evaluated as the sum of residues. A difficulty in this method

  18. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos

    DTIC Science & Technology

    2009-05-01

    instruments applied to mode-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ... the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems ... equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors

  19. Dynamic optimization of chemical processes using ant colony framework.

    PubMed

    Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D

    2001-11-01

    The ant colony framework is illustrated by considering the dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively low computational effort. The examples analyzed here, with varying degrees of complexity, illustrate its potential for solving a large class of process optimization problems in chemical engineering.
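
    The abstract gives no algorithmic detail, so the following is only a structural sketch, under simplifying assumptions, of an ant-colony approach to dynamic optimization: the control profile is discretized onto a time grid, each ant samples one admissible control level per grid point from pheromone intensities, and pheromone is reinforced on the best choices found. The toy first-order tracking model, the grid, and all constants are illustrative, not the benchmark problems of the paper.

```python
import numpy as np

# Structural sketch of ant-colony dynamic optimisation: u(t) is piecewise
# constant on a time grid, ants sample levels from pheromone intensities,
# and pheromone is reinforced on the best profile found so far.

rng = np.random.default_rng(0)
levels = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # admissible control values
n_grid, t_final, dt_sim = 10, 2.0, 0.01

def cost(profile):
    # Simulate dx/dt = -x + u with explicit Euler; penalise tracking error
    # against x_ref = 1 plus a small control-effort term.
    x, J = 0.0, 0.0
    steps = int(t_final / n_grid / dt_sim)
    for u in profile:
        for _ in range(steps):
            J += ((x - 1.0) ** 2 + 0.1 * u ** 2) * dt_sim
            x += (-x + u) * dt_sim
    return J

def ant_colony(n_ants=30, iters=60, rho=0.1):
    tau = np.ones((n_grid, len(levels)))            # pheromone intensities
    best_cost, best_choice, best_profile = np.inf, None, None
    for _ in range(iters):
        probs = tau / tau.sum(axis=1, keepdims=True)
        for _ant in range(n_ants):
            choice = np.array([rng.choice(len(levels), p=probs[g])
                               for g in range(n_grid)])
            profile = levels[choice]
            c = cost(profile)
            if c < best_cost:
                best_cost, best_choice, best_profile = c, choice, profile
        # Evaporate, then reinforce the best-so-far choices.
        tau *= (1.0 - rho)
        tau[np.arange(n_grid), best_choice] += 1.0
    return best_cost, best_profile

if __name__ == "__main__":
    c, u = ant_colony()
    print("best cost:", round(float(c), 4))
    print("control profile:", u)
```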

  20. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  1. WE-D-303-01: Development and Application of Digital Human Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segars, P.

    2015-06-15

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; Discuss the developments and applications of computational phantoms; Know the promises and limitations of computational phantoms in solving complex problems.

  2. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  3. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  4. Rotor dynamic considerations for large wind power generator systems

    NASA Technical Reports Server (NTRS)

    Ormiston, R. A.

    1973-01-01

    Successful large, reliable, low maintenance wind turbines must be designed with full consideration for minimizing dynamic response to aerodynamic, inertial, and gravitational forces. Much of existing helicopter rotor technology is applicable to this problem. Compared with helicopter rotors, large wind turbines are likely to be relatively less flexible with higher dimensionless natural frequencies. For very large wind turbines, low power output per unit weight and stresses due to gravitational forces are limiting factors. The need to reduce rotor complexity to a minimum favors the use of cantilevered (hingeless) rotor configurations where stresses are relieved by elastic deformations.

  5. Diagnostic imaging learning resources evaluated by students and recent graduates.

    PubMed

    Alexander, Kate; Bélisle, Marilou; Dallaire, Sébastien; Fernandez, Nicolas; Doucet, Michèle

    2013-01-01

    Many learning resources can help students develop the problem-solving abilities and clinical skills required for diagnostic imaging. This study explored veterinary students' perceptions of the usefulness of a variety of learning resources. Perceived resource usefulness was measured for different levels of students and for academic versus clinical preparation. Third-year (n=139) and final (fifth) year (n=105) students and recent graduates (n=56) completed questionnaires on perceived usefulness of each resource. Resources were grouped for comparison: abstract/low complexity (e.g., notes, multimedia presentations), abstract/high complexity (e.g., Web-based and film case repositories), concrete/low complexity (e.g., large-group "clicker" workshops), and concrete/high complexity (e.g., small-group interpretation workshops). Lower-level students considered abstract/low-complexity resources more useful for academic preparation and concrete resources more useful for clinical preparation. Higher-level students/recent graduates also considered abstract/low-complexity resources more useful for academic preparation. For all levels, lecture notes were considered highly useful. Multimedia slideshows were an interactive complement to notes. The usefulness of a Web-based case repository was limited by accessibility problems and difficulty. Traditional abstract/low-complexity resources were considered useful for more levels and contexts than expected. Concrete/high-complexity resources need to better represent clinical practice to be considered more useful for clinical preparation.

  6. Combined Parameter and State Estimation Problem in a Complex Domain: RF Hyperthermia Treatment Using Nanoparticles

    NASA Astrophysics Data System (ADS)

    Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.

    2016-09-01

    Particle filter methods have been widely used to solve inverse problems through sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, the measurements and the parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between the estimated and exact values is obtained.
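
    To make the combined state and parameter estimation idea concrete, the sketch below runs a bootstrap particle filter on an augmented state for a toy scalar heating model; the model, noise levels, parameter jitter and priors are illustrative assumptions rather than the paper's bioheat model.

```python
import numpy as np

# Bootstrap particle filter with an augmented state: each particle carries the
# state x and the unknown parameter a of the toy model x_{t+1} = x_t + a*dt + noise.

rng = np.random.default_rng(0)
N, T, dt = 2000, 50, 1.0
true_a = 0.35

# Synthetic "measurements" of the state.
x_true = 20.0 + true_a * dt * np.arange(1, T + 1)
y_obs = x_true + rng.normal(0.0, 0.5, size=T)

# Particle ensemble for the augmented state (x, a).
x = rng.normal(20.0, 1.0, size=N)
a = rng.uniform(0.0, 1.0, size=N)

for t in range(T):
    # Predict: propagate the state; jitter the parameter slightly so the
    # filter keeps exploring parameter space (a common practical device).
    a = a + rng.normal(0.0, 0.01, size=N)
    x = x + a * dt + rng.normal(0.0, 0.1, size=N)
    # Update: weight particles by the Gaussian measurement likelihood.
    w = np.exp(-0.5 * ((y_obs[t] - x) / 0.5) ** 2) + 1e-300
    w /= w.sum()
    # Resample (multinomial, for brevity; systematic resampling is preferable).
    idx = rng.choice(N, size=N, p=w)
    x, a = x[idx], a[idx]

print("estimated rate a ~", round(float(a.mean()), 3), "(true:", true_a, ")")
print("estimated final state ~", round(float(x.mean()), 2), "(true:", round(float(x_true[-1]), 2), ")")
```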

  7. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations in a finite Galois field modulo q. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to the changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in memory and CPU requirements necessary for the algorithms. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
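
    The global algorithm studied above is ordinary Gaussian elimination over GF(q); a compact, dense reference implementation is sketched below for a prime modulus, where every nonzero pivot is invertible. The small example system is illustrative only; the paper works with very large random sparse systems.

```python
# Gaussian elimination over GF(q) for A x = b (mod q). q must be prime so
# that every nonzero pivot has a modular inverse. Dense and unoptimised.

def gauss_mod_q(A, b, q):
    """Reduce [A | b] mod q in place; return one solution or None if inconsistent."""
    n_rows, n_cols = len(A), len(A[0])
    row, pivots = 0, []
    for col in range(n_cols):
        # Find a row with a nonzero entry in this column to act as pivot.
        piv = next((r for r in range(row, n_rows) if A[r][col] % q != 0), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        b[row], b[piv] = b[piv], b[row]
        inv = pow(A[row][col], -1, q)            # modular inverse (Python 3.8+)
        A[row] = [(v * inv) % q for v in A[row]]
        b[row] = (b[row] * inv) % q
        # Eliminate this column from every other row.
        for r in range(n_rows):
            if r != row and A[r][col] % q:
                f = A[r][col]
                A[r] = [(a - f * p) % q for a, p in zip(A[r], A[row])]
                b[r] = (b[r] - f * b[row]) % q
        pivots.append(col)
        row += 1
    # Inconsistent system: an all-zero row with a nonzero right-hand side.
    if any(all(v % q == 0 for v in A[r]) and b[r] % q for r in range(n_rows)):
        return None
    x = [0] * n_cols
    for r, col in enumerate(pivots):
        x[col] = b[r]                            # free variables stay at 0
    return x

if __name__ == "__main__":
    q = 7
    A = [[1, 2, 0], [0, 3, 5], [2, 0, 1]]
    b = [3, 1, 4]
    x = gauss_mod_q([row[:] for row in A], b[:], q)
    print("solution:", x)
    print("check A x mod q:", [sum(ai * xi for ai, xi in zip(row, x)) % q for row in A])
```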

  8. Lessons for Staff Developers from an Organization Development Intervention.

    ERIC Educational Resources Information Center

    Conway, James A.

    1990-01-01

    A case study of an organization development intervention in a large New York State school district describes to staff developers the complex process of discovering and responding to organizational needs. The discussion focuses on understanding the problem; frameworks for diagnosis and intervention; and implementing the intervention strategy.…

  9. Unified Approximations: A New Approach for Monoprotic Weak Acid-Base Equilibria

    ERIC Educational Resources Information Center

    Pardue, Harry; Odeh, Ihab N.; Tesfai, Teweldemedhin M.

    2004-01-01

    The unified approximations reduce the conceptual complexity by combining solutions for a relatively large number of different situations into just two similar sets of processes. Processes used to solve problems by either the unified or classical approximations require similar degrees of understanding of the underlying chemical processes.

  10. Estimating the Local Size and Coverage of Interaction Network Regions

    ERIC Educational Resources Information Center

    Eagle, Michael; Barnes, Tiffany

    2015-01-01

    Interactive problem solving environments, such as intelligent tutoring systems and educational video games, produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled the student-tutor interactions using complex network…

  11. Heuristic Evaluation of Three Jordanian University Websites

    ERIC Educational Resources Information Center

    Hasan, Layla

    2013-01-01

    Generally, universities have complex and large websites, which include a collection of many sub-sites related to the different parts of universities (e.g. registration unit, faculties, departments). Managers of academic institutions and educational websites need to know types of usability problems that could be found on their websites. This would…

  12. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem. This makes deterministic optimization methods inappropriate for solving the stated problem. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. Those algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of using these algorithm modifications because they reduce the search space.

  13. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied by using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, where helicopter operations become even more challenging, by using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields by using moving unstructured grids which rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver, PUMA2, is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which aim to provide new tools and insights to the aerospace and wind energy scientific communities, are carried out during this research by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  14. Deformable segmentation via sparse representation and dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and the node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.

  16. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity and time. The authors have, however, developed a PSO-based technique, namely the iterative improvement binary particle swarm technique, to address the very large capacitated multi-item multi-level lot sizing (CMIMLLS) problem. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and an iterative improvement local search mechanism is then employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus performs best and shows excellent results.
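
    A generic binary PSO skeleton of the kind used in the first stage described above is sketched below, demonstrated on a toy 0/1 knapsack surrogate rather than the full CMIMLLS model; the sigmoid transfer function is the standard way of mapping velocities to bit probabilities, and all constants, the penalty rule and the absence of the local-search stage are illustrative simplifications.

```python
import numpy as np

# Binary PSO on a toy 0/1 knapsack surrogate. Velocities are mapped to
# bit-flip probabilities through a sigmoid (the standard BPSO transfer).

rng = np.random.default_rng(42)
values = rng.integers(1, 20, size=30)
weights = rng.integers(1, 10, size=30)
capacity = 80

def fitness(bits):
    total_weight = np.dot(bits, weights)
    return np.dot(bits, values) if total_weight <= capacity else 0  # infeasible -> 0

def binary_pso(n_particles=40, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    dim = len(values)
    x = rng.integers(0, 2, size=(n_particles, dim))
    v = rng.normal(0, 1, size=(n_particles, dim))
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_fit.argmax()].copy()
    gbest_fit = pbest_fit.max()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        prob = 1.0 / (1.0 + np.exp(-v))              # sigmoid transfer function
        x = (rng.random((n_particles, dim)) < prob).astype(int)
        fit = np.array([fitness(p) for p in x])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        if fit.max() > gbest_fit:
            gbest_fit, gbest = fit.max(), x[fit.argmax()].copy()
    return gbest_fit, gbest

if __name__ == "__main__":
    best_value, best_bits = binary_pso()
    print("best feasible value found:", best_value)
```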

  17. Lessons Learned from Crowdsourcing Complex Engineering Tasks

    PubMed Central

    Kijewski-Correa, Tracy; Thain, Douglas; Kareem, Ahsan; Madey, Gregory

    2015-01-01

    Crowdsourcing: Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations. Harnessing Crowdworkers for Engineering: Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks. This was done to determine if the benefits of crowdsourcing could be harnessed to accurately and effectively contribute to solving complex real world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data. Virtual Wind Tunnel: We compared the skill of the anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students, making a first pass at analyzing wind simulation data. For the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. A second phase of our experiment instructed crowd workers and students to create simulations on our Virtual Wind Tunnel website to solve a more complex task. Conclusions: With a sufficiently comprehensive tutorial and compensation similar to typical crowd-sourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students with more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers. As tasks become more complex, the employment relationship begins to become more akin to outsourcing than crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems. PMID:26383029

  18. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of spectra is an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is therefore quite important for USCM. It can be shown by straightforward mathematical computation that the dual form of USCM is equivalent to a minimum enclosing ball (MEB) problem; the core vector machine (CVM) is therefore introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining on small- and medium-scale datasets and on large-scale datasets, respectively.

  19. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  20. Pacemakers in large arrays of oscillators with nonlocal coupling

    NASA Astrophysics Data System (ADS)

    Jaramillo, Gabriela; Scheel, Arnd

    2016-02-01

    We model pacemaker effects of an algebraically localized heterogeneity in a 1 dimensional array of oscillators with nonlocal coupling. We assume the oscillators obey simple phase dynamics and that the array is large enough so that it can be approximated by a continuous nonlocal evolution equation. We concentrate on the case of heterogeneities with positive average and show that steady solutions to the nonlocal problem exist. In particular, we show that these heterogeneities act as a wave source. This effect is not possible in 3 dimensional systems, such as the complex Ginzburg-Landau equation, where the wavenumber of weak sources decays at infinity. To obtain our results we use a series of isomorphisms to relate the nonlocal problem to the viscous eikonal equation. We then use Fredholm properties of the Laplace operator in Kondratiev spaces to obtain solutions to the eikonal equation, and by extension to the nonlocal problem.

  1. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in the non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimen and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, these are generally not obtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because of the long run times of large-scale problems, accelerations or fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.

  2. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm achieves good results for large-deformation and multi-modal three-dimensional medical image registration.
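
    As a toy illustration of the optimization step described above, the sketch below minimizes a simple 1-D registration energy (a sum-of-squared-differences data term plus a first-difference smoothness penalty on the displacement field) with L-BFGS via scipy. The signals, the SSD data term and the regularization weight are illustrative stand-ins for the paper's gray-level and structure-tensor terms on 3-D multi-modal volumes.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

# Toy 1-D registration energy minimised with L-BFGS: SSD between a warped
# "moving" signal and a "fixed" signal, plus a smoothness penalty on u.

n = 100
xs = np.arange(n, dtype=float)
fixed = np.exp(-0.5 * ((xs - 50.0) / 8.0) ** 2)     # reference signal
moving = np.exp(-0.5 * ((xs - 56.0) / 8.0) ** 2)    # same signal shifted by 6 samples

def energy(u, alpha=0.1):
    # Warp the moving signal by the displacement field u (linear interpolation).
    warped = map_coordinates(moving, [xs + u], order=1, mode="nearest")
    data_term = np.sum((warped - fixed) ** 2)
    smooth_term = np.sum(np.diff(u) ** 2)
    return data_term + alpha * smooth_term

result = minimize(energy, x0=np.zeros(n), method="L-BFGS-B")
# Near the peaks the recovered displacement should be roughly the 6-sample shift.
print("displacement at the fixed peak:", round(float(result.x[50]), 2))
```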

  3. Modeling the behaviour of shape memory materials under large deformations

    NASA Astrophysics Data System (ADS)

    Rogovoy, A. A.; Stolbova, O. S.

    2017-06-01

    In this study, the models describing the behavior of shape memory alloys, ferromagnetic materials and polymers have been constructed, using a formalized approach to develop the constitutive equations for complex media under large deformations. The kinematic and constitutive equations, satisfying the principles of thermodynamics and objectivity, have been derived. The application of the Galerkin procedure to the systems of equations of solid mechanics allowed us to obtain the Lagrange variational equation and variational formulation of the magnetostatics problems. These relations have been tested in the context of the problems of finite deformation in shape memory alloys and ferromagnetic materials during forward and reverse martensitic transformations and in shape memory polymers during forward and reverse relaxation transitions from a highly elastic to a glassy state.

  4. Effects of Aromatherapy on Test Anxiety and Performance in College Students

    ERIC Educational Resources Information Center

    Dunnigan, Jocelyn Marie

    2013-01-01

    Test anxiety is a complex, multidimensional construct composed of cognitive, affective, and behavioral components that have been shown to negatively affect test performance. Furthermore, test anxiety is a pervasive problem in modern society largely related to the evaluative nature of educational programs, therefore meriting study of its nature,…

  5. Youth and the Workplace: Second-Chance Programs and the Hard-to-Serve.

    ERIC Educational Resources Information Center

    Smith, Thomas J.; And Others

    The task of addressing the complex and deeply rooted problems faced by the nation's at-risk youth is one that largely falls outside the scope of traditional institutions. Investment in the development and operation of "second-chance" education and employment programs has historically been inadequate, haphazard, and uncertain. The gains…

  6. Integrating ecological and social knowledge: learning from CHANS research

    Treesearch

    Bruce Shindler; Thomas A. Spies; John P. Bolte; Jeffrey D. Kline

    2017-01-01

    Scientists are increasingly called upon to integrate across ecological and social disciplines to tackle complex coupled human and natural system (CHANS) problems. Integration of these disciplines is challenging and many scientists do not have experience with large integrated research projects. However, much can be learned about the complicated process of integration...

  7. Using Reliable Change to Calculate Clinically Significant Progress in Children with EBD: A BHRS Program Evaluation

    ERIC Educational Resources Information Center

    Thoder, Vincent J.; Hesky, James G.; Cautilli, Joseph D.

    2010-01-01

    Children often have complex emotional and behavioral disorders (ADHD, ODD, Depression, PTSD, etc.). A large amount of research exists in the behavioral treatment of children with these disorders regarding specific behavioral problems. Much less research exists for the treatment of comprehensive problematic behaviors that these children experience…

  8. Meeting the Diverse Needs of Students with EBD in Inclusive Science Classrooms

    ERIC Educational Resources Information Center

    Watt, Sarah J.; Therrien, William J.; Kaldenberg, Erica R.

    2014-01-01

    This article reports that for students with emotional and behavioral disorders (EBD), difficulties in core academic skills may play a large role in student success or failure in science. Why? Comprehending science texts can be particularly problematic with their complex and often unknown vocabulary words. Another problem identified as an…

  9. Uncertainty analysis in ecological studies: an overview

    Treesearch

    Harbin Li; Jianguo Wu

    2006-01-01

    Large-scale simulation models are essential tools for scientific research and environmental decision-making because they can be used to synthesize knowledge, predict consequences of potential scenarios, and develop optimal solutions (Clark et al. 2001, Berk et al. 2002, Katz 2002). Modeling is often the only means of addressing complex environmental problems that occur...

  10. Fast and Accurate Detection of Spread Source in Large Complex Networks

    DTIC Science & Technology

    the patient one in epidemics, or the source of rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the ... important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all

  11. Actions, Objectives & Concerns. Human Parameters for Architectural Design.

    ERIC Educational Resources Information Center

    Lasswell, Thomas E.; And Others

    An experiment conducted at California State College, Los Angeles, to test the value of social-psychological research in defining building needs is described. The problems of how to identify and synthesize the disparate objectives, concerns and actions of the groups who use or otherwise have an interest in large and complex buildings is discussed.…

  12. The Relative Efficiency of Two Strategies for Conducting Cognitive Task Analysis

    ERIC Educational Resources Information Center

    Flynn, Catherine L.

    2012-01-01

    Cognitive task analysis (CTA) has evolved over the past half century to capture the mental decisions and analysis that experts have learned to implement when solving complex problems. Since expertise is largely automated and nonconscious, a variety of observation and interview strategies have been developed to identify the most critical cognitive…

  13. Discriminant WSRC for Large-Scale Plant Species Recognition.

    PubMed

    Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong

    2017-01-01

    In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, consisting of two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen according to the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and the leaf category is then assigned through the minimum reconstruction error. Different from traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training set. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC adapts to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and a high recognition rate and can be clearly interpreted.
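
    The classification step that DWSRC builds on can be illustrated with a minimal sparse-representation classifier: the test sample is sparsely coded over a dictionary whose columns are training samples and assigned to the class with the smallest class-wise reconstruction error. The random stand-in data, the use of plain orthogonal matching pursuit instead of the paper's weighted sparse representation, and the sparsity level are assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Minimal sparse-representation classification: code the test sample over a
# dictionary of training samples, then pick the class with the smallest
# class-wise reconstruction error.

rng = np.random.default_rng(0)
n_features, per_class, n_classes = 64, 12, 3

# Dictionary: each class contributes noisy copies of its own prototype.
prototypes = rng.normal(size=(n_classes, n_features))
D = np.hstack([prototypes[c][:, None] + 0.3 * rng.normal(size=(n_features, per_class))
               for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), per_class)
D = D / np.linalg.norm(D, axis=0)                  # unit-norm atoms

def src_classify(y, n_nonzero=5):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, y)
    coef = omp.coef_
    errors = []
    for c in range(n_classes):
        mask = labels == c
        # Reconstruct using only this class's atoms and coefficients.
        errors.append(np.linalg.norm(y - D[:, mask] @ coef[mask]))
    return int(np.argmin(errors)), errors

if __name__ == "__main__":
    test = prototypes[1] + 0.3 * rng.normal(size=n_features)
    predicted, errs = src_classify(test)
    print("predicted class:", predicted, "(expected 1)")
```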

  14. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  15. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  16. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, a wide range of industries and fields apply computer information technology, which creates huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits while generating the smallest overhead. As one of the common algorithmic techniques, dynamic programming is used to solve problems with a certain optimal-substructure property. When a problem contains a large number of overlapping subproblems that require repeated calculation, the ordinary recursive method consumes exponential time, whereas dynamic programming can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared with other approaches, reducing the computational complexity and enriching the computational results. In this paper, we expound the concept, basic elements, properties, core idea, solution steps and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
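
    As a concrete illustration of the kind of production-planning model the paper describes, the toy sketch below (all data hypothetical) chooses per-period production quantities to meet demand at minimum setup, production, and holding cost, memoizing each (period, inventory) state so no subproblem is recomputed.

    ```python
    from functools import lru_cache

    demand = [2, 3, 2, 4]          # demand per period (assumed data)
    max_prod, max_inv = 5, 5       # production and inventory limits (assumed)
    setup, unit, hold = 3.0, 1.0, 0.5

    @lru_cache(maxsize=None)       # memoize each (period, inventory) state
    def best_cost(t, inv):
        if t == len(demand):
            return 0.0
        options = []
        for p in range(max_prod + 1):
            end_inv = inv + p - demand[t]
            if 0 <= end_inv <= max_inv:
                stage = (setup if p > 0 else 0.0) + unit * p + hold * end_inv
                options.append(stage + best_cost(t + 1, end_inv))
        return min(options) if options else float("inf")

    print(best_cost(0, 0))         # minimum total cost starting with empty stock
    ```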

  17. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N^3) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method of complexity O(N^2 log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.

  18. The Daily Operational Brief: Fostering Daily Readiness, Care Coordination, and Problem-Solving Accountability in a Large Pediatric Health Care System.

    PubMed

    Donnelly, Lane F; Basta, Kathryne C; Dykes, Anne M; Zhang, Wei; Shook, Joan E

    2018-01-01

    At a pediatric health system, the Daily Operational Brief (DOB) was updated in 2015 after three years of operation. Quality and safety metrics, the patient volume and staffing assessment, and the readiness assessment are all presented. In addition, in the problem-solving accountability system, problematic issues are categorized as Quick Hits or Complex Issues. Walk-the-Wall, a biweekly meeting attended by hospital senior administrative leadership and quality and safety leaders, is conducted to chart current progress on Complex Issues. The DOB provides a daily standardized approach to evaluate readiness to provide care to current patients and improvement in the care to be provided for future patients. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  19. Lee-Yang zero analysis for the study of QCD phase structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejiri, Shinji

    2006-03-01

    We comment on the Lee-Yang zero analysis for the study of the phase structure of QCD at high temperature and baryon number density by Monte-Carlo simulations. We find that the sign problem for nonzero density QCD induces a serious problem in the finite volume scaling analysis of the Lee-Yang zeros for the investigation of the order of the phase transition. If the sign problem occurs at large volume, the Lee-Yang zeros will always approach the real axis of the complex parameter plane in the thermodynamic limit. This implies that a scaling behavior which would suggest a crossover transition will not be obtained. To clarify this problem, we discuss the Lee-Yang zero analysis for SU(3) pure gauge theory as a simple example without the sign problem, and then consider the case of nonzero density QCD. It is suggested that the distribution of the Lee-Yang zeros in the complex parameter space obtained by each simulation could be more important information for the investigation of the critical endpoint in the (T, μ_q) plane than the finite volume scaling behavior.

  20. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, specifically job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both the quality of the solution and the network complexity.

  1. P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems

    NASA Technical Reports Server (NTRS)

    Kang, Kab S.

    2002-01-01

    The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to multimaterial complex geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P1 nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning.

  2. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology

    PubMed Central

    Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.

    2017-01-01

    Background: We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods: We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results: We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions: These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling. PMID:28813442

  3. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology.

    PubMed

    Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R

    2017-01-01

    We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling.

  4. Phenomenological theory of collective decision-making

    NASA Astrophysics Data System (ADS)

    Zafeiris, Anna; Koman, Zsombor; Mones, Enys; Vicsek, Tamás

    2017-08-01

    An essential task of groups is to provide efficient solutions for the complex problems they face. Indeed, considerable efforts have been devoted to the question of collective decision-making related to problems involving a single dominant feature. Here we introduce a quantitative formalism for finding the optimal distribution of the group members' competences in the more typical case when the underlying problem is complex, i.e., multidimensional. Thus, we consider teams that aim at obtaining the best possible answer to a problem having a number of independent sub-problems. Our approach is based on a generic scheme for the process of evaluating the proposed solutions (i.e., negotiation). We demonstrate that the best performing groups have at least one specialist for each sub-problem; a far less intuitive result is that finding the optimal solution by the interacting group members requires that the specialists also have some insight into the sub-problems beyond their unique field(s). We present empirical results obtained using a large-scale database of citations, which are in good agreement with the above theory. The framework we have developed can easily be adapted to a variety of realistic situations, since taking into account the weights of the sub-problems, the opinions or the relations of the group is straightforward. Consequently, our method can be used in several contexts, especially when the optimal composition of a group of decision-makers is designed.

  5. Model correlation and damage location for large space truss structures: Secant method development and evaluation

    NASA Technical Reports Server (NTRS)

    Smith, Suzanne Weaver; Beattie, Christopher A.

    1991-01-01

    On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches use an identification technique to determine structural characteristics from measurements of the structure response. This problem is approached with one particular class of identification techniques, matrix adjustment methods, which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New identification methods were developed to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.

  6. A Kohonen-like decomposition method for the Euclidean traveling salesman problem-KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
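
    The decompose-solve-patch strategy can be illustrated with a small hedged sketch (plain k-means plus a nearest-neighbour path heuristic standing in for the KNIES self-organizing map; all parameters are illustrative):

    ```python
    import numpy as np

    def nn_path(pts):
        """Greedy nearest-neighbour Hamiltonian path within one cluster."""
        order, left = [0], set(range(1, len(pts)))
        while left:
            nxt = min(left, key=lambda j: np.linalg.norm(pts[order[-1]] - pts[j]))
            order.append(nxt)
            left.remove(nxt)
        return order

    def decompose_tsp(cities, k=4, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = cities[rng.choice(len(cities), k, replace=False)]
        for _ in range(iters):  # plain k-means to partition the cities
            assign = np.argmin(((cities[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([cities[assign == c].mean(axis=0)
                                if np.any(assign == c) else centers[c]
                                for c in range(k)])
        tour = []
        for c in np.argsort(centers[:, 0]):  # patch cluster paths left to right
            idx = np.where(assign == c)[0]
            if len(idx):
                tour.extend(idx[i] for i in nn_path(cities[idx]))
        return tour

    cities = np.random.default_rng(1).random((60, 2))
    print(decompose_tsp(cities)[:10])
    ```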

  7. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, which improves solution efficiency. The set of nonlinear constraints (named the complicating constraints) which makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This efficiency gain can be extremely important for work in progress, and it can be particularly useful for cases where the computation time is a critical factor in obtaining an optimized solution in due time.
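
    A generic sketch of the two-step idea (not the authors' model; the objective, constraints, and data below are hypothetical): step one solves the problem without the complicating nonlinear constraint, and step two re-solves the full problem warm-started from the step-one solution.

    ```python
    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    def objective(x):
        return np.sum((x - 2.0) ** 2)                       # placeholder cost

    simple = NonlinearConstraint(lambda x: x.sum(), 1.0, 5.0)
    complicating = NonlinearConstraint(lambda x: x[0] * x[1] - x[2] ** 2, 0.0, np.inf)

    x0 = np.zeros(3)
    # Step 1: solve the simplified model (complicating constraint removed).
    step1 = minimize(objective, x0, method="trust-constr", constraints=[simple])
    # Step 2: solve the complete model, starting from the step-1 solution.
    step2 = minimize(objective, step1.x, method="trust-constr",
                     constraints=[simple, complicating])
    print(step2.x, step2.fun)
    ```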

  8. Mexican Hat Wavelet Kernel ELM for Multiclass Classification.

    PubMed

    Wang, Jie; Song, Yi-Fan; Ma, Tian-Lei

    2017-01-01

    Kernel extreme learning machine (KELM) is a novel feedforward neural network that is widely used in classification problems. To some extent, it alleviates the existing problems of invalid nodes and large computational complexity in ELM. However, the traditional KELM classifier usually has low test accuracy when it faces multiclass classification problems. To solve this problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier improves the training accuracy and reduces the training time in multiclass classification problems. Moreover, the validity of the Mexican Hat wavelet as a kernel function of ELM is rigorously proved. Experimental results on different data sets show that the performance of the proposed classifier is significantly superior to that of the compared classifiers.
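
    A hedged sketch of how such a classifier could be assembled: the closed-form output weights follow the standard KELM solution beta = (I/C + K)^(-1) T, while the product-form Mexican hat wavelet kernel below is one common choice and an assumption here, not necessarily the exact kernel used in the paper.

    ```python
    import numpy as np

    def mexican_hat_kernel(X, Z, a=1.0):
        # K(x, z) = prod_d (1 - u_d^2) * exp(-u_d^2 / 2), with u_d = (x_d - z_d) / a
        U = (X[:, None, :] - Z[None, :, :]) / a
        return np.prod((1.0 - U ** 2) * np.exp(-(U ** 2) / 2.0), axis=2)

    def kelm_train(X, y, C=10.0, a=1.0):
        T = np.eye(int(y.max()) + 1)[y]                    # one-hot class targets
        K = mexican_hat_kernel(X, X, a)
        return np.linalg.solve(np.eye(len(X)) / C + K, T)  # output weights beta

    def kelm_predict(X_test, X_train, beta, a=1.0):
        return np.argmax(mexican_hat_kernel(X_test, X_train, a) @ beta, axis=1)

    # Tiny usage example with random data (illustrative only).
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(30, 4)), rng.integers(0, 3, size=30)
    beta = kelm_train(X, y)
    print(kelm_predict(X, X, beta)[:10])
    ```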

  9. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
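
    As a flavour of the high-level constructs referred to above, the minimal pyomo.dae sketch below (a toy model, not the semibatch reactor or adsorber case studies; it assumes the Ipopt solver is installed) declares a differential equation directly and lets a transformation discretize it:

    ```python
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               TransformationFactory, SolverFactory, minimize)
    from pyomo.dae import ContinuousSet, DerivativeVar

    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0, 1))          # continuous time domain
    m.x = Var(m.t)                              # state
    m.u = Var(m.t, bounds=(-1, 1))              # control
    m.dxdt = DerivativeVar(m.x, wrt=m.t)

    # Dynamics written directly as a differential constraint: dx/dt = -x + u
    m.ode = Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])
    m.x[0].fix(1.0)                             # initial condition
    m.obj = Objective(expr=m.x[1], sense=minimize)

    # Automatic discretization replaces the ODE with algebraic equations.
    TransformationFactory('dae.finite_difference').apply_to(m, nfe=50, scheme='BACKWARD')
    SolverFactory('ipopt').solve(m)
    print(m.x[1].value)
    ```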

  10. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  11. Chapter 3: Isolation of Photosystem II Reaction Center Complexes from Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seibert, M.; Picorel, R.

    2011-01-01

    Methods to isolate and purify 6- and 5-Chl D1/D2/Cyt b559 photosystem II (PSII) reaction center (RC) complexes from plants are presented, and the advantages and disadvantages of each procedure are discussed. One of the simpler 6-Chl procedures and a procedure for isolating 5-Chl complexes are described in detail. Furthermore, a rapid procedure that produces relatively large amounts of less pure 6-Chl material (i.e., more nonpigmented protein) is also described. Criteria to assess the purity of PSII RC preparations are presented, and problems associated with each of the isolation procedures are discussed.

  12. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  13. Combinatorial Approach for Large-scale Identification of Linked Peptides from Tandem Mass Spectrometry Spectra*

    PubMed Central

    Wang, Jian; Anania, Veronica G.; Knott, Jeff; Rush, John; Lill, Jennie R.; Bourne, Philip E.; Bandeira, Nuno

    2014-01-01

    The combination of chemical cross-linking and mass spectrometry has recently been shown to constitute a powerful tool for studying protein–protein interactions and elucidating the structure of large protein complexes. However, computational methods for interpreting the complex MS/MS spectra from linked peptides are still in their infancy, making the high-throughput application of this approach largely impractical. Because of the lack of large annotated datasets, most current approaches do not capture the specific fragmentation patterns of linked peptides and therefore are not optimal for the identification of cross-linked peptides. Here we propose a generic approach to address this problem and demonstrate it using disulfide-bridged peptide libraries to (i) efficiently generate large mass spectral reference data for linked peptides at a low cost and (ii) automatically train an algorithm that can efficiently and accurately identify linked peptides from MS/MS spectra. We show that using this approach we were able to identify thousands of MS/MS spectra from disulfide-bridged peptides through comparison with proteome-scale sequence databases and significantly improve the sensitivity of cross-linked peptide identification. This allowed us to identify 60% more direct pairwise interactions between the protein subunits in the 20S proteasome complex than existing tools on cross-linking studies of the proteasome complexes. The basic framework of this approach and the MS/MS reference dataset generated should be valuable resources for the future development of new tools for the identification of linked peptides. PMID:24493012

  14. Complex plane integration in the modelling of electromagnetic fields in layered media: part 1. Application to a very large loop

    NASA Astrophysics Data System (ADS)

    Silva, Valdelírio da Silva e.; Régis, Cícero; Howard, Allen Q., Jr.

    2014-02-01

    This paper analyses the details of a procedure for the numerical integration of Hankel transforms in the calculation of the electromagnetic fields generated by a large horizontal loop over a 1D earth. The method performs the integration by deforming the integration path into the complex plane and applying Cauchy's theorem on a modified version of the integrand. The modification is the replacement of the Bessel functions J0 and J1 by the Hankel functions H_0^{(1)} and H_1^{(1)} respectively. The integration in the complex plane takes advantage of the exponentially decaying behaviour of the Hankel functions, allowing calculation on very small segments, instead of the infinite line of the original improper integrals. A crucial point in this problem is the location of the poles. The companion paper shows two methods to estimate the pole locations. We have used this method to calculate the fields of very large loops. Our results show that this method allows the estimation of the integrals with fewer evaluations of the integrand functions than other methods.
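
    The replacement of the Bessel functions by Hankel functions rests on standard identities (general facts, stated here for context rather than taken from the paper): on the real axis the Bessel function splits into the two Hankel functions, and H_nu^(1) decays exponentially in the upper half of the complex plane, which is what allows the integration path to be deformed there by Cauchy's theorem.

    ```latex
    J_\nu(x) = \tfrac{1}{2}\left[ H_\nu^{(1)}(x) + H_\nu^{(2)}(x) \right],
    \qquad
    H_\nu^{(1)}(z) \sim \sqrt{\frac{2}{\pi z}}\,
      e^{\,i\left(z - \nu\pi/2 - \pi/4\right)} \quad (|z| \to \infty),
    ```

    so |H_nu^(1)(z)| falls off like e^(-Im z) as the path moves into the upper half-plane, letting the integral be evaluated on a short deformed segment instead of the infinite real line.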

  15. Complexity in language acquisition.

    PubMed

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  16. Life Outside the Golden Window: Statistical Angles on the Signal-to-Noise Problem

    NASA Astrophysics Data System (ADS)

    Wagman, Michael

    2018-03-01

    Lattice QCD simulations of multi-baryon correlation functions can predict the structure and reactions of nuclei without encountering the baryon chemical potential sign problem. However, they suffer from a signal-to-noise problem where Monte Carlo estimates of observables have quantum fluctuations that are exponentially larger than their average values. Recent lattice QCD results demonstrate that the complex phase of baryon correlation functions relates the baryon signal-to-noise problem to a sign problem and exhibits unexpected statistical behavior resembling a heavy-tailed random walk on the unit circle. Estimators based on differences of correlation function phases evaluated at different Euclidean times are discussed that avoid the usual signal-to-noise problem, instead facing a signal-to-noise problem as the time interval associated with the phase difference is increased, and allow hadronic observables to be determined from arbitrarily large-time correlation functions.
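
    For context, the exponential degradation referred to above is usually quantified by the standard Parisi-Lepage argument (a general lattice QCD fact, not a result specific to this proceedings): for a single-nucleon correlator C(t) averaged over N samples,

    ```latex
    \mathrm{StN}\bigl[C(t)\bigr]
      \;=\; \frac{\langle C(t)\rangle}{\sqrt{\operatorname{Var}\,C(t)/N}}
      \;\sim\; \sqrt{N}\; e^{-\left(M_N - \tfrac{3}{2} m_\pi\right) t},
    ```

    so the statistical error grows exponentially relative to the signal as the Euclidean time t increases.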

  17. Architecture-driven reuse of code in KASE

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    In order to support the synthesis of large, complex software systems, we need to focus on issues pertaining to the architectural design of a system in addition to algorithm and data structure design. An approach that is based on abstracting the architectural design of a set of problems in the form of a generic architecture, and providing tools that can be used to instantiate the generic architecture for specific problem instances is presented. Such an approach also facilitates reuse of code between different systems belonging to the same problem class. An application of our approach on a realistic problem is described; the results of the exercise are presented; and how our approach compares to other work in this area is discussed.

  18. Strategic planning for disaster recovery with stochastic last mile distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell Whitford; Van Hentenryck, Pascal; Coffrin, Carleton

    2010-01-01

    This paper considers the single commodity allocation problem (SCAP) for disaster recovery, a fundamental problem faced by all populated areas. SCAPs are complex stochastic optimization problems that combine resource allocation, warehouse routing, and parallel fleet routing. Moreover, these problems must be solved under tight runtime constraints to be practical in real-world disaster situations. This paper formalizes the specification of SCAPs and introduces a novel multi-stage hybrid-optimization algorithm that utilizes the strengths of mixed integer programming, constraint programming, and large neighborhood search. The algorithm was validated on hurricane disaster scenarios generated by Los Alamos National Laboratory using state-of-the-art disaster simulation tools and is deployed to aid federal organizations in the US.

  19. Problem analysis of geotechnical well drilling in complex environment

    NASA Astrophysics Data System (ADS)

    Kasenov, A. K.; Biletskiy, M. T.; Ratov, B. T.; Korotchenko, T. V.

    2015-02-01

    The article examines the primary causes of problems occurring during the drilling of geotechnical wells (injection, production and monitoring wells) for in-situ leaching to extract uranium in South Kazakhstan. One such drilling problem, hole caving, which is basically caused by various chemical and physical factors (hydraulic, mechanical, etc.), has been thoroughly investigated. The analysis of packing causes has revealed that this problem usually occurs because of an insufficient amount of drilling mud, associated with a small cross-section downward flow and a relatively large cross-section upward flow. This is explained by the fact that when spear bores are used to drill clay rocks, the cutting size is usually rather big and there is a risk that clay particles will coagulate.

  20. Predicting protein structures with a multiplayer online game.

    PubMed

    Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit

    2010-08-05

    People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.

  1. Classification of brain MRI with big data and deep 3D convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim

    2018-02-01

    Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic Resonance Imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: First, volumetric medical image data is considerably scarcer than natural images. Second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI) or Alzheimer's disease (AD). We use more than 20,000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, only with standard preprocessing, but without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets and the independent CADDementia benchmark.
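
    An illustrative stand-in for such a network (a minimal PyTorch sketch, not the authors' architecture; layer sizes are arbitrary): stacked Conv3d/pooling blocks followed by a linear head producing scores for the three classes HC / MCI / AD.

    ```python
    import torch
    import torch.nn as nn

    class Brain3DCNN(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),           # global pooling over the volume
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                      # x: (batch, 1, D, H, W) MRI volumes
            return self.classifier(self.features(x).flatten(1))

    scores = Brain3DCNN()(torch.randn(2, 1, 64, 64, 64))   # -> tensor of shape (2, 3)
    ```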

  2. Internet computer coaches for introductory physics problem solving

    NASA Astrophysics Data System (ADS)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem-solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation will address the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments will be explained. The data collected from the large-scale experimental studies will be discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by the final exam and a problem-solving rubric. It will also address the implications drawn from this study, including using the data to direct future coach design, and difficulties in conducting authentic assessment of problem solving.

  3. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including, in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi-minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh fitted to subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  4. A new decision sciences for complex systems.

    PubMed

    Lempert, Robert J

    2002-05-14

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.

  5. Comparing the basins of attraction for several methods in the circular Sitnikov problem with spheroid primaries

    NASA Astrophysics Data System (ADS)

    Zotos, Euaggelos E.

    2018-06-01

    The circular Sitnikov problem, where the two primary bodies are prolate or oblate spheroids, is numerically investigated. In particular, the basins of convergence on the complex plane are revealed by using a large collection of numerical methods of several orders. We consider four cases, regarding the value of the oblateness coefficient, which determines the nature of the roots (attractors) of the system. For all cases we use the iterative schemes for performing a thorough and systematic classification of the nodes on the complex plane. The distribution of the iterations, as well as the probability and their correlations with the corresponding basins of convergence, are also discussed. Our numerical computations indicate that most of the iterative schemes provide relatively similar convergence structures on the complex plane. However, there are some numerical methods for which the corresponding basins of attraction are extremely complicated, with highly fractal basin boundaries. Moreover, it is proved that the efficiency strongly varies between the numerical methods.
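
    The general mechanics of mapping basins of convergence on the complex plane can be sketched as follows (a generic illustration using Newton's method on f(z) = z^3 - 1, not the Sitnikov system itself): each grid point is iterated, and the root it converges to plus the number of iterations needed are recorded.

    ```python
    import numpy as np

    roots = np.exp(2j * np.pi * np.arange(3) / 3)        # the three cube roots of unity

    def newton_basins(n=400, max_iter=50, tol=1e-10):
        x = np.linspace(-2.0, 2.0, n)
        z = x[None, :] + 1j * x[:, None]                  # grid of starting points
        basin = np.full(z.shape, -1)                      # index of the attracting root
        iters = np.zeros(z.shape, dtype=int)              # iterations until convergence
        for k in range(max_iter):
            z = z - (z**3 - 1.0) / (3.0 * z**2)           # Newton iteration step
            for i, r in enumerate(roots):
                hit = (np.abs(z - r) < tol) & (basin == -1)
                basin[hit], iters[hit] = i, k
        return basin, iters

    basin, iters = newton_basins()
    print(np.bincount(basin.ravel() + 1))                 # unconverged / per-root counts
    ```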

  6. Object oriented development of engineering software using CLIPS

    NASA Technical Reports Server (NTRS)

    Yoon, C. John

    1991-01-01

    Engineering applications involve numeric complexity and the manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than the numeric complexity, has become the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of the object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The object-oriented features of the CLIPS Object Oriented Language (COOL) are more versatile than those of C++. A software design methodology based on object-oriented and procedural approaches, appropriate for engineering software and to be implemented in CLIPS, was outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.

  7. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
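
    For reference, the equation whose solution dominates the computation is the differential matrix Riccati equation of the linear-quadratic problem, written here in standard notation (a textbook statement, not taken from the report):

    ```latex
    -\dot{P}(t) = A^{\mathsf T} P(t) + P(t) A
                  - P(t) B R^{-1} B^{\mathsf T} P(t) + Q,
    \qquad P(t_f) = S_f,
    ```

    with optimal feedback u*(t) = -R^{-1} B^T P(t) x(t); propagating the n x n matrix P backward from the terminal condition is the cost that grows rapidly with the number of state variables.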

  8. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone. Examples include a larger design parameter space, optimization that may combine continuous and combinatoric variables, and a combined objective function that may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters in both the model space and the design space.

  9. Asymptotic behavior of solutions of the renormalization group K-epsilon turbulence model

    NASA Technical Reports Server (NTRS)

    Yakhot, A.; Staroselsky, I.; Orszag, S. A.

    1994-01-01

    Presently, the only efficient way to calculate turbulent flows in complex geometries of engineering interest is to use Reynolds-averaged Navier-Stokes (RANS) equations. Compared to the original Navier-Stokes problem, these RANS equations possess a much more complicated nonlinear structure and may exhibit far more complex nonlinear behavior. In certain cases, the asymptotic behavior of such models can be studied analytically, which, aside from being an interesting fundamental problem, is important for better understanding of the internal structure of the models as well as for improving their performance. The renormalization group (RNG) K-epsilon turbulence model, derived directly from the incompressible Navier-Stokes equations, is analyzed. It has already been used to calculate a variety of turbulent and transitional flows in complex geometries. For large values of the RNG viscosity parameter, the model may exhibit singular behavior. In the form of the RNG K-epsilon model that avoids the use of explicit wall functions, a = 1, so the RNG viscosity parameter must be smaller than 23.62 to avoid singularities.

  10. Health conditions and health-policy innovations in Brazil: the way forward.

    PubMed

    Victora, Cesar G; Barreto, Mauricio L; do Carmo Leal, Maria; Monteiro, Carlos A; Schmidt, Maria Ines; Paim, Jairnilson; Bastos, Francisco I; Almeida, Celia; Bahia, Ligia; Travassos, Claudia; Reichenheim, Michael; Barros, Fernando C

    2011-06-11

    Brazil is a large complex country that is undergoing rapid economic, social, and environmental change. In this Series of six articles, we have reported important improvements in health status and life expectancy, which can be ascribed largely to progress in social determinants of health and to implementation of a comprehensive national health system with strong social participation. Many challenges remain, however. Socioeconomic and regional disparities are still unacceptably large, reflecting the fact that much progress is still needed to improve basic living conditions for a large proportion of the population. New health problems arise as a result of urbanisation and social and environmental change, and some old health issues remain unabated. Administration of a complex, decentralised public-health system, in which a large share of services is contracted out to the private sector, together with many private insurance providers, inevitably causes conflict and contradiction. The challenge is ultimately political, and we conclude with a call for action that requires continuous engagement by Brazilian society as a whole in securing the right to health for all Brazilian people. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Contamination of the freshwater ecosystem by pesticides

    USGS Publications Warehouse

    Cope, Oliver B.

    1966-01-01

    A large part of our disquieting present-day pesticide problem is intimately tied to the freshwater ecosystem. Economic poisons are used in so many types of terrain to control so many kinds of organisms that almost all lakes and streams are likely to be contaminated. In addition to accidental contamination many pesticides are deliberately applied directly to fresh waters for suppression of aquatic animals or plants. The problem is intensified because of the extreme susceptibility of freshwater organisms. The complexity of freshwater environments and their variety makes it difficult to comprehend the total effect of pesticides.

  12. Challenges in an Aging Society: Presidential Address to APPAM

    ERIC Educational Resources Information Center

    Swartz, Katherine

    2010-01-01

    The United States is at a critical crossroads in its history right now. The public policy problems that the people are facing are complex and interrelated, and the demographic changes that are about to significantly change their country are not well understood by large numbers of people. In this presidential address to the Association for Public…

  13. Using REU Projects and Crowdsourcing to Facilitate Learning on Demand

    ERIC Educational Resources Information Center

    Liu, Hong P.; Klein, Jerry E.

    2013-01-01

    With the increasing complexity of technology and large quantities of data in our digital age, learning and training has become a major cost of employers. Employee competence depends more and more on how quickly one can acquire new knowledge and solve problems to meet pressing deadlines. This paper presents a practical method to use REU (Research…

  14. Integration of Digital Technology and Innovative Strategies for Learning and Teaching Large Classes: A Calculus Case Study

    ERIC Educational Resources Information Center

    Vajravelu, Kuppalapalle; Muhs, Tammy

    2016-01-01

    Successful science and engineering programs require proficiency and dynamics in mathematics classes to enhance the learning of complex subject matter with a sufficient amount of practical problem solving. Improving student performance and retention in mathematics classes requires inventive approaches. At the University of Central Florida (UCF) the…

  15. Is Relational Reasoning Dependent on Language? A Voxel-Based Lesion Symptom Mapping Study

    ERIC Educational Resources Information Center

    Baldo, Juliana V.; Bunge, Silvia A.; Wilson, Stephen M.; Dronkers, Nina F.

    2010-01-01

    Previous studies with brain-injured patients have suggested that language abilities are necessary for complex problem-solving, even when tasks are non-verbal. In the current study, we tested this notion by analyzing behavioral and neuroimaging data from a large group of left-hemisphere stroke patients (n = 107) suffering from a range of language…

  16. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE PAGES

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...

    2016-10-01

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and the concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid-structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  17. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and the concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid-structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  18. Research on the adaptive optical control technology based on DSP

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun

    2018-02-01

    Adaptive optics is a real-time compensation technique that uses a high-speed supporting system to correct wavefront errors caused by atmospheric turbulence. However, the randomness and rapidity of atmospheric changes introduce great difficulties into the design of adaptive optical systems. The large number of complex real-time operations leads to large delays, which is a major obstacle. To address this problem, a hardware-based computation and parallel-processing strategy is proposed, and a high-speed adaptive optical control system based on DSP is developed. A hardware counter is used to check the system. The results show that the system completes a closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments were carried out with good results.
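    As a hedged illustration of the closed-loop principle described above (not the DSP implementation in the paper), the sketch below runs a plain integrator control law on a toy set of wavefront modes; the gain, mode count, and disturbance model are arbitrary assumptions.

    ```python
    import numpy as np

    # Toy adaptive-optics loop: residual_k = disturbance_k - correction_k,
    # correction_{k+1} = correction_k + gain * residual_k (integrator control law).
    rng = np.random.default_rng(0)
    n_modes, n_steps, gain = 20, 200, 0.5
    disturbance = np.cumsum(0.05 * rng.standard_normal((n_steps, n_modes)), axis=0)

    correction = np.zeros(n_modes)
    residual_rms = []
    for k in range(n_steps):
        residual = disturbance[k] - correction     # what the wavefront sensor would report
        correction += gain * residual              # update the corrector (e.g. deformable mirror)
        residual_rms.append(np.sqrt(np.mean(residual ** 2)))

    open_loop = np.sqrt(np.mean(disturbance[-1] ** 2))
    print(f"open-loop RMS {open_loop:.3f} -> closed-loop RMS {residual_rms[-1]:.3f}")
    ```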

  19. Scheduling multirobot operations in manufacturing by truncated Petri nets

    NASA Astrophysics Data System (ADS)

    Chen, Qin; Luh, J. Y.

    1995-08-01

    Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resource allocation, etc., are available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically unmanageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller subnets. The complexity involved in analyzing each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem for a manufacturing work cell.

  20. Environmental refugees in a globally warmed world

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, N.

    1993-12-01

    This paper examines the complex problem of environmental refugees as among the most serious of all the effects of global warming. Shoreline erosion, coastal flooding, and agricultural disruption from drought, soil erosion and desertification are factors now and in the future in creating a group of environmental refugees. Estimates are that at least 10 million such refugees exist today. A preliminary analysis is presented here as a first attempt to understand the full character and extent of the problem. Countries with large delta and coastal areas and large populations are at particular risk from sea-level rise of as little as 0.5-1 meter, compounded by storm surge and salt water intrusions. Bangladesh, Egypt, China, and India are discussed in detail along with island states at risk. Other global warming effects such as shifts in monsoon systems and severe and persistent droughts make agriculture particularly vulnerable. Lack of soil moisture during the growing season will probably be the primary problem. Additional and compounding environmental problems are discussed, and an overview of the economic, sociocultural and political consequences is given. 96 refs., 1 tab.

  1. Developing science gateways for drug discovery in a grid environment.

    PubMed

    Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra

    2016-01-01

    Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, we face an increasing challenge in providing the life sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.

  2. A Practical Measure for the Complexity of Evolving Seismicity Patterns

    NASA Astrophysics Data System (ADS)

    Goltz, C.

    2005-12-01

    Earthquakes are a "complex" phenomenon. There is, however, no clear definition of what complexity actually is. Yet, it is important to distinguish between what is merely complicated and what is complex in the sense that simple rules can give rise to very rich behaviour. Seismicity is certainly a complicated phenomenon (difficult to understand), but simple models such as cellular automata indicate that earthquakes are truly complex. From the observational point of view, there remains the problem of quantifying complexity in real-world seismicity patterns. Such a measurement is desirable, not only for fundamental understanding but also for monitoring and possibly for forecasting. Perhaps the most workable definitions of complexity come from informatics, summarised under the topic of algorithmic complexity. Here, after introducing the concepts, I apply such a measure of complexity to temporally evolving real-world seismicity patterns. Finally, I discuss the usefulness of the approach and consider the results in view of the occurrence of large earthquakes.
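    As a hedged sketch of how an algorithmic-complexity measure can be applied to an evolving seismicity record, the example below binarizes a toy event-rate series and counts LZ78-style phrases; this is one illustrative choice of measure and normalization, not necessarily the one used in the study.

    ```python
    import numpy as np

    def lz78_phrase_count(bits):
        """Count distinct LZ78 phrases in a binary string (a simple complexity proxy)."""
        phrases, w, count = set(), "", 0
        for ch in bits:
            w += ch
            if w not in phrases:
                phrases.add(w)
                count += 1
                w = ""
        return count

    rng = np.random.default_rng(0)
    rate = rng.poisson(5, 500)                     # toy weekly earthquake counts
    med = np.median(rate)
    bits = "".join('1' if r > med else '0' for r in rate)

    raw = lz78_phrase_count(bits)
    norm = raw / (len(bits) / np.log2(len(bits)))  # rough normalization by a random-sequence scale
    print(raw, round(norm, 2))
    ```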

  3. Visual Perception-Based Statistical Modeling of Complex Grain Image for Product Quality Monitoring and Supervision on Assembly Production Line

    PubMed Central

    Chen, Qing; Xu, Pengfei; Liu, Wenzhong

    2016-01-01

    Computer vision as a fast, low-cost, noncontact, and online monitoring technology has been an important tool to inspect product quality, particularly on a large-scale assembly production line. However, the current industrial vision system is far from satisfactory in the intelligent perception of complex grain images, comprising a large number of local homogeneous fragmentations or patches without distinct foreground and background. We attempt to solve this problem based on the statistical modeling of spatial structures of grain images. We present a physical explanation in advance to indicate that the spatial structures of the complex grain images are subject to a representative Weibull distribution according to the theory of sequential fragmentation, which is well known in the continued comminution of ore grinding. To delineate the spatial structure of the grain image, we present a method of multiscale and omnidirectional Gaussian derivative filtering. Then, a product quality classifier based on sparse multikernel–least squares support vector machine is proposed to solve the low-confidence classification problem of imbalanced data distribution. The proposed method is applied on the assembly line of a food-processing enterprise to classify (or identify) automatically the production quality of rice. The experiments on the real application case, compared with the commonly used methods, illustrate the validity of our method. PMID:26986726
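    A minimal sketch of the statistical-modeling step is given below: Gaussian-derivative filter responses of an image are pooled and a Weibull distribution is fitted to their magnitudes. The synthetic image, the single scale with two orientations, and the SciPy fitting routine are simplifying assumptions; the paper's pipeline uses a multiscale, omnidirectional filter bank and a dedicated classifier.

    ```python
    import numpy as np
    from scipy import ndimage, stats

    # Synthetic grain-like image (stand-in for a real product image).
    rng = np.random.default_rng(0)
    img = ndimage.gaussian_filter(rng.random((256, 256)), 3)

    # Gaussian derivative responses at one scale, two orientations (x and y).
    gx = ndimage.gaussian_filter(img, sigma=2, order=(0, 1))
    gy = ndimage.gaussian_filter(img, sigma=2, order=(1, 0))
    mag = np.hypot(gx, gy).ravel()
    mag = mag[mag > 0]

    # Fit a two-parameter Weibull (location fixed at zero) to the response magnitudes.
    shape, loc, scale = stats.weibull_min.fit(mag, floc=0)
    print(f"Weibull shape = {shape:.2f}, scale = {scale:.4f}")
    ```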

  4. The mathematical statement for the solving of the problem of N-version software system design

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    N-version programming, as a methodology for fault-tolerant software system design, allows such tasks to be solved successfully. The N-version programming approach turns out to be effective because the system is constructed from several versions of a software module executed in parallel. Those versions are written to meet the same specification but by different programmers. The problem of developing an optimal structure for an N-version software system is a very complex optimization problem. This makes deterministic optimization methods inappropriate for solving the stated problem. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with large dimensionality.

  5. Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.

    1999-11-01

    A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is being paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.

  6. REVIEWS OF TOPICAL PROBLEMS: Axisymmetric stationary flows in compact astrophysical objects

    NASA Astrophysics Data System (ADS)

    Beskin, Vasilii S.

    1997-07-01

    A review is presented of the analytical results available for a large class of axisymmetric stationary flows in the vicinity of compact astrophysical objects. The determination of the two-dimensional structure of the poloidal magnetic field (hydrodynamic flow field) faces severe difficulties, due to the complexity of the trans-field equation for stationary axisymmetric flows. However, an approach exists which enables direct problems to be solved even within the balance law framework. This possibility arises when an exact solution to the equation is available and flows close to it are investigated. As a result, with the use of simple model problems, the basic features of supersonic flows past real compact objects are determined.

  7. Exploring asynchronous brainstorming in large groups: a field comparison of serial and parallel subgroups.

    PubMed

    de Vreede, Gert-Jan; Briggs, Robert O; Reiter-Palmon, Roni

    2010-04-01

    The aim of this study was to compare the results of two different modes of using multiple groups (instead of one large group) to identify problems and develop solutions. Many of the complex problems facing organizations today require the use of very large groups or collaborations of groups from multiple organizations. There are many logistical problems associated with the use of such large groups, including the ability to bring everyone together at the same time and location. A field study involving two different organizations compared group productivity and satisfaction. The approaches included (a) multiple small groups, each completing the entire process from start to end and combining the results at the end (parallel mode); and (b) multiple subgroups, each building on the work provided by previous subgroups (serial mode). Groups using the serial mode produced more elaborations compared with parallel groups, whereas parallel groups produced more unique ideas compared with serial groups. No significant differences were found related to satisfaction with process and outcomes between the two modes. The preferred mode depends on the type of task facing the group. Parallel groups are more suited to tasks for which a variety of new ideas is needed, whereas serial groups are best suited when elaboration and in-depth thinking on the solution are required. Results of this research can guide the development of facilitated sessions of large groups or "teams of teams."

  8. Death of the (traveling) salesman: primates do not show clear evidence of multi-step route planning.

    PubMed

    Janson, Charles

    2014-05-01

    Several comparative studies have linked larger brain size to a fruit-eating diet in primates and other animals. The general explanation for this correlation is that fruit is a complex resource base, consisting of many discrete patches of many species, each with distinct nutritional traits, the production of which changes predictably both within and between seasons. Using this information to devise optimal spatial foraging strategies is among the most difficult problems to solve in all of mathematics, a version of the famous Traveling Salesman Problem. Several authors have suggested that primates might use their large brains and complex cognition to plan foraging strategies that approximate optimal solutions to this problem. Three empirical studies have examined how captive primates move when confronted with the simplest version of the problem: a spatial array of equally valuable goals. These studies have all concluded that the subjects remember many food source locations and show very efficient travel paths; some authors also inferred that the subjects may plan their movements based on considering combinations of three or more future goals at a time. This analysis re-examines critically the claims of planned movement sequences from the evidence presented. The efficiency of observed travel paths is largely consistent with use of the simplest of foraging rules, such as visiting the nearest unused "known" resource. Detailed movement sequences by test subjects are most consistent with a rule that mentally sums spatial information from all unused resources in a given trial into a single "gravity" measure that guides movements to one destination at a time. © 2013 Wiley Periodicals, Inc.
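    The two candidate heuristics can be made concrete with a small sketch. The "gravity" scoring rule below is a hypothetical formalization chosen for illustration (clustered, nearby sites exert more pull), not the exact rule inferred in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sites = rng.random((12, 2)) * 100          # equally valuable food sites
    start = np.array([50.0, 50.0])

    def path_length(order):
        pts = np.vstack([start, sites[order]])
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

    def nearest_neighbour():
        pos, left, order = start, list(range(len(sites))), []
        while left:
            j = min(left, key=lambda k: np.linalg.norm(sites[k] - pos))
            order.append(j); left.remove(j); pos = sites[j]
        return order

    def gravity_rule():
        # attraction of a site grows with how clustered it is, discounted by travel distance
        pos, left, order = start, list(range(len(sites))), []
        while left:
            def score(k):
                pull = sum(1.0 / (1.0 + np.linalg.norm(sites[k] - sites[m]))
                           for m in left if m != k)
                return (1.0 + pull) / (1e-9 + np.linalg.norm(sites[k] - pos))
            j = max(left, key=score)
            order.append(j); left.remove(j); pos = sites[j]
        return order

    print("nearest-neighbour path length:", round(path_length(nearest_neighbour()), 1))
    print("gravity-rule path length:     ", round(path_length(gravity_rule()), 1))
    ```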

  9. About the bears and the bees: Adaptive responses to asymmetric warfare

    NASA Astrophysics Data System (ADS)

    Ryan, Alex

    Conventional military forces are organised to generate large scale effects against similarly structured adversaries. Asymmetric warfare is a 'game' between a conventional military force and a weaker adversary that is unable to match the scale of effects of the conventional force. In asymmetric warfare, an insurgents' strategy can be understood using a multi-scale perspective: by generating and exploiting fine scale complexity, insurgents prevent the conventional force from acting at the scale they are designed for. This paper presents a complex systems approach to the problem of asymmetric warfare, which shows how future force structures can be designed to adapt to environmental complexity at multiple scales and achieve full spectrum dominance.

  11. Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources

    PubMed Central

    Leeson, Mark S.

    2014-01-01

    The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but, as population-based algorithms, they often face serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by the temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, that is, one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information about network topologies. Besides the SRHC strategy, some useful designs are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential to be extended to other large-scale complex problems. PMID:24883371
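    The receding-horizon idea can be sketched with a toy GA that optimizes one genome window (a stand-in for a network partition) at a time while the rest stays frozen. The bitstring fitness below is a placeholder, not the network-coding-resource objective, and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, window, pop_size, gens = 60, 15, 40, 30
    target = rng.integers(0, 2, n_genes)            # hypothetical "minimal coding" pattern

    def fitness(x):
        return -int(np.sum(x != target))            # 0 is best

    best = rng.integers(0, 2, n_genes)
    for lo in range(0, n_genes, window):            # receding-horizon sweep over windows
        hi = lo + window
        pop = np.tile(best, (pop_size, 1))
        pop[:, lo:hi] = rng.integers(0, 2, (pop_size, window))
        for _ in range(gens):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[len(pop) // 2:]]     # keep the better half
            children = parents[rng.permutation(len(parents))].copy()
            cut = lo + rng.integers(1, window)                    # one-point crossover inside window
            children[:, lo:cut] = parents[:, lo:cut]
            mut = rng.random((len(children), n_genes)) < 0.02
            mut[:, :lo] = False
            mut[:, hi:] = False                                   # mutate only inside the window
            children[mut] = 1 - children[mut]
            pop = np.vstack([parents, children])
        best = pop[int(np.argmax([fitness(ind) for ind in pop]))]

    print("mismatches after sweep:", -fitness(best))
    ```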

  12. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as providing reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve: it requires finding an optimum within a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult since the earth model to be inverted will have more detailed structures. Conventional deterministic methods are incapable of solving such a complicated inverse problem because they suffer from entrapment in local minima. Alternatively, stochastic optimization methods are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
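    A minimal Hamiltonian (Hybrid) Monte Carlo sampler is sketched below on a toy two-parameter Gaussian "posterior" as a stand-in for the geosteering earth model; the step size, trajectory length, and target density are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    def hmc(logp, grad, x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
        rng = np.random.default_rng(seed)
        x, out = np.asarray(x0, float), []
        for _ in range(n_samples):
            p = rng.standard_normal(x.shape)
            xn, pn = x.copy(), p + 0.5 * eps * grad(x)    # half momentum step
            for _ in range(n_leap - 1):                   # leapfrog integration
                xn += eps * pn
                pn += eps * grad(xn)
            xn += eps * pn
            pn += 0.5 * eps * grad(xn)                    # final half momentum step
            h_old = -logp(x) + 0.5 * p @ p
            h_new = -logp(xn) + 0.5 * pn @ pn
            if np.log(rng.random()) < h_old - h_new:      # Metropolis correction
                x = xn
            out.append(x.copy())
        return np.array(out)

    # toy correlated Gaussian posterior over two model parameters
    C = np.array([[1.0, 0.8], [0.8, 1.0]])
    Ci = np.linalg.inv(C)
    samples = hmc(lambda x: -0.5 * x @ Ci @ x, lambda x: -Ci @ x, [3.0, -3.0])
    print(samples[500:].mean(axis=0))
    print(np.cov(samples[500:].T))
    ```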

  13. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Technical Reports Server (NTRS)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  14. Innovative Technological Development of Russian Mining Regions (on Example of Kemerovo Region)

    NASA Astrophysics Data System (ADS)

    Shavina, Evgeniya; Kalenov, Oleg

    2017-11-01

    A characteristic trend in the modern development of many countries is the transition to an innovative economy. At present, this is the only opportunity to secure and maintain a high standard of living for the population. Moreover, the innovative development of Russia can be achieved through technological progress in its regions. In this regard, it is necessary to assess the innovative potential of the region and identify the most pressing problems that impede the transition to a trajectory of innovative development. The authors outline several main indicators that help to determine the level of innovation and technological development of one of the largest industrial areas of Russia, the Kemerovo region. The special economic role of the Kemerovo region as a large territorial old-industrial complex of Western Siberia requires large-scale work to solve the most acute problems of regional development. It is necessary to address the existing problems through a system of state regulation aimed at making the innovation component a leading factor in the competitiveness of the regional economy.

  15. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  17. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert while penalizing model size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  18. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
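    For contrast with the quantum result, the classical side of the learning-parity-with-noise task can be simulated with a few lines that simply count oracle queries; the noise rate, problem size, and majority-vote strategy below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, eta, reps = 5, 0.1, 25
    s = rng.integers(0, 2, n)                     # hidden parity string

    def oracle(x):
        """Return <x, s> mod 2, flipped with probability eta."""
        return int((x @ s + (rng.random() < eta)) % 2)

    # classical strategy: query each basis vector repeatedly and majority-vote each bit
    queries, s_hat = 0, np.zeros(n, int)
    for i in range(n):
        e = np.zeros(n, int)
        e[i] = 1
        votes = sum(oracle(e) for _ in range(reps))
        queries += reps
        s_hat[i] = int(votes > reps / 2)

    print("oracle queries:", queries, "| recovered correctly:", bool(np.array_equal(s_hat, s)))
    ```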

  19. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. Over the same period, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high-strain and high-strain-rate problems. In this field, an accurate analysis of large-deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the large amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is presented. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates the development, maintainability and expandability of such codes. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, the operator overloading procedure and the use of template classes, are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservative and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.

  20. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  1. Uncertainty Reduction for Stochastic Processes on Complex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.
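    The flavour of the observer-placement problem can be conveyed with a crude greedy sketch that repeatedly picks the node covering the most still-unobserved neighbours; this coverage proxy is an assumption for illustration and is not the quasioptimal algorithm introduced in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, budget = 60, 0.06, 6
    A = np.triu((rng.random((n, n)) < p).astype(int), 1)
    A = A + A.T                                    # random undirected interaction network

    covered = np.zeros(n, bool)
    observers = []
    for _ in range(budget):
        # gain = number of currently uncovered nodes this choice would cover
        gain = [int((~covered[A[i] > 0]).sum()) + int(not covered[i]) for i in range(n)]
        best = int(np.argmax(gain))
        observers.append(best)
        covered[best] = True
        covered[A[best] > 0] = True

    print("observers:", observers, "| covered fraction:", round(covered.mean(), 2))
    ```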

  2. It's the Physics: Organized Complexity in the Arctic/Midlatitude Weather Controversy

    NASA Astrophysics Data System (ADS)

    Overland, J. E.; Francis, J. A.; Wang, M.

    2017-12-01

    There is intense scientific and public interest in whether major Arctic changes can and will impact mid-latitude weather. Despite numerous workshops and a growing literature, convergence of understanding is lacking, with major objections about possible large impacts within the scientific community. Yet research on the Arctic as a new potential driver for improving subseasonal forecasting at midlatitudes remains a priority. A recent review attributed part of the controversy to shortcomings in experimental design and ill-suited metrics, such as examining the influence of only sea-ice loss rather than overall Arctic temperature amplification, and/or calculating averages over large regions, long time periods, or many ensemble members that would tend to obscure event-like Arctic connections. The present analysis places the difficulty at a deeper level, owing to the inherently complex physics. Jet-stream dynamics and weather linkages on the scale of a week to months have characteristics of an organized complex system, with large-scale processes that operate in patterned, quasi-geostrophic ways but whose component feedbacks are continually changing. Arctic linkages may be state dependent, i.e., relationships may be more robust in one atmospheric wave pattern than another, generating intermittency. The observational network is insufficient to fully initialize such a system and the inherent noise obscures linkage signals, leading to an underdetermined problem; often more than one explanation can fit the data. Further, the problem may be computationally irreducible; the only way to know the result of these interactions is to trace out their path over time. Modeling is a suggested approach, but at present it is unclear whether previous model studies fully resolve the anticipated complexity. The jet stream from autumn to early winter is characterized by non-linear interactions among enhanced atmospheric planetary waves, irregular transitions between zonal and meridional flows, and the maintenance of atmospheric blocks (near-stationary large-amplitude atmospheric waves). For weather forecast improvement, but not necessarily to elucidate mechanisms of linkages, a Numerical Weather Prediction (NWP) approach is appropriate; such is the plan for the upcoming Year of Polar Prediction (YOPP).

  3. Detection of expansion at large angle grain boundaries using electron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balluffi, R.W.; Bristowe, P.D.

    1984-02-01

    Lamarre and Sass (LS) (Scripta Metall. 17: 1141(1983)) observed a grain boundary electron diffraction effect from a large angle twist boundary which they claim can be used to obtain the volume expansion at the grain boundary in a direction normal to it. This paper considers the case where the intensity from the grain boundary region is close to lattice reflections on the same element of the boundary diffraction lattice. Analysis of this complex problem shows that the simplified model of LS is misleading in this case. (DLC)

  4. The Social Life of a Data Base

    NASA Technical Reports Server (NTRS)

    Linde, Charlotte; Wales, Roxana; Clancy, Dan (Technical Monitor)

    2002-01-01

    This paper presents the complex social life of a large data base. The topics include: 1) Social Construction of Mechanisms of Memory; 2) Data Bases: The Invisible Memory Mechanism; 3) The Human in the Machine; 4) Data of the Study: A Large-Scale Problem Reporting Data Base; 5) The PRACA Study; 6) Description of PRACA; 7) PRACA and Paper; 8) Multiple Uses of PRACA; 9) The Work of PRACA; 10) Multiple Forms of Invisibility; 11) Such Systems are Everywhere; and 12) Two Morals to the Story. This paper is in viewgraph form.

  5. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  6. To draw or not to draw? Examining the necessity of problem diagrams using massive open online course experiments

    NASA Astrophysics Data System (ADS)

    Chen, Zhongzhou; Demirci, Neset; Choi, Youn-Jeng; Pritchard, David E.

    2017-06-01

    Previous research on problem diagrams suggested that including a supportive diagram, one that does not provide necessary problem-solving information, may bring little, or even negative, benefit to students' problem-solving success. We tested the usefulness of problem diagrams on 12 different physics problems (6 A/B experiments) in our massive open online course. By analyzing over 8000 student responses in total, we found that including a problem diagram that contains no significant additional information only slightly improves the first-attempt correct rate for the few most spatially complex problems, and has little impact on either the final correct percentage or the time spent on solving the problem. On the other hand, in half of the cases, removing the diagram significantly increased the fraction of students drawing their own diagrams during problem solving. The increase in drawing behavior is largely independent of students' physics abilities. In summary, our results suggest that for many physics problems, the benefit of a diagram is exceedingly small and may not justify the effort of creating one.

  7. Through the Glass Darkly: New Paradigms for Counselors, Courage, and Spirituality in Contemporary Education

    ERIC Educational Resources Information Center

    Goodell, Judy; Robinson, David C.

    2008-01-01

    This article proposes a paradigm shift in the view of the school counselor role. Evolving from the dualistic mind/body split of traditional physics, counseling has largely focused on problem identification and attempting to fix what is wrong. The new sciences of chaos and complexity invite a more holistic view, with the psychospiritual development…

  8. A Set of Vertically Integrated Inquiry-Based Practical Curricula that Develop Scientific Thinking Skills for Large Cohorts of Undergraduate Students

    ERIC Educational Resources Information Center

    Zimbardi, Kirsten; Bugarcic, Andrea; Colthorpe, Kay; Good, Jonathan P.; Lluka, Lesley J.

    2013-01-01

    Science graduates require critical thinking skills to deal with the complex problems they will face in their 21st century workplaces. Inquiry-based curricula can provide students with the opportunities to develop such critical thinking skills; however, evidence suggests that an inappropriate level of autonomy provided to under prepared students…

  9. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks are especially affected by this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode with various degrees of detailed elaboration.

  10. An Overview of Computational Aeroacoustic Modeling at NASA Langley

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2001-01-01

    The use of computational techniques in the area of acoustics is known as computational aeroacoustics and has shown great promise in recent years. Although an ultimate goal is to use computational simulations as a virtual wind tunnel, the problem is so complex that blind applications of traditional algorithms are typically unable to produce acceptable results. The phenomena of interest are inherently unsteady and cover a wide range of frequencies and amplitudes. Nonetheless, with appropriate simplifications and special care to resolve specific phenomena, currently available methods can be used to solve important acoustic problems. These simulations can be used to complement experiments, and often give much more detailed information than can be obtained in a wind tunnel. The use of acoustic analogy methods to inexpensively determine far-field acoustics from near-field unsteadiness has greatly reduced the computational requirements. A few examples of current applications of computational aeroacoustics at NASA Langley are given. There remains a large class of problems that require more accurate and efficient methods. Research to develop more advanced methods that are able to handle the geometric complexity of realistic problems using block-structured and unstructured grids is highlighted.

  11. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays.

    PubMed

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-26

    The zero-potential scanning circuit is widely used as a read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. The zero-potential scanning circuit can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error which originates from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array; therefore, it severely increases the power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver which uses only one operational amplifier to drive all rows of a sensor array with high accuracy. The measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, markedly reduced from the 30.7% of the previous counterpart.

  12. A simple encoding method for Sigma-Delta ADC based biopotential acquisition systems.

    PubMed

    Guerrero, Federico N; Spinelli, Enrique M

    2017-10-01

    Sigma-Delta analogue-to-digital converters allow acquiring the full dynamic range of biomedical signals at the electrodes, resulting in less complex hardware and increased measurement robustness. However, the increased data size per sample (typically 24 bits) demands the transmission of extremely large volumes of data across the isolation barrier, thus increasing power consumption on the patient side. This problem is accentuated when a large number of channels is used, as in current 128-256-electrode biopotential acquisition systems, which usually opt for an optical fibre link to the computer. An analogous problem occurs for simpler low-power acquisition platforms that transmit data through a wireless link to a computing platform. In this paper, a low-complexity encoding method is presented to decrease sample data size without losses, while preserving the full DC-coupled signal. The method achieved an average compression ratio of 2.3, evaluated over an ECG and EMG signal bank acquired with equipment based on Sigma-Delta converters. It demands a very low processing load: a C-language implementation is presented that resulted in an average execution time of 110 clock cycles on an 8-bit microcontroller.
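    The abstract does not spell out the encoding, so the sketch below shows one generic low-complexity scheme in the same spirit: first-difference (delta) coding of the 24-bit samples followed by zig-zag, variable-length byte packing, which stays lossless and keeps the DC level. It is an assumption-laden illustration, not the published method.

    ```python
    def encode(samples):
        """Delta-encode signed 24-bit samples, then pack each difference as a
        variable-length quantity (7 data bits per byte, MSB = continuation)."""
        out, prev = bytearray(), 0
        for s in samples:
            d = int(s) - prev
            prev = int(s)
            u = (d << 1) ^ (d >> 31)          # zig-zag: small |d| -> small unsigned value
            while True:
                b = u & 0x7F
                u >>= 7
                out.append((b | 0x80) if u else b)
                if not u:
                    break
        return bytes(out)

    def decode(data):
        samples, prev, u, shift = [], 0, 0, 0
        for b in data:
            u |= (b & 0x7F) << shift
            shift += 7
            if not b & 0x80:
                prev += (u >> 1) ^ -(u & 1)   # undo zig-zag and accumulate
                samples.append(prev)
                u, shift = 0, 0
        return samples

    raw = [100000, 100012, 99998, 100005, 100005]   # slowly varying 24-bit samples
    packed = encode(raw)
    assert decode(packed) == raw
    print(f"{len(packed)} bytes vs {3 * len(raw)} bytes uncompressed")
    ```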

  13. Expression and assembly of a fully active antibody in algae

    NASA Astrophysics Data System (ADS)

    Mayfield, Stephen P.; Franklin, Scott E.; Lerner, Richard A.

    2003-01-01

    Although combinatorial antibody libraries have solved the problem of access to large immunological repertoires, efficient production of these complex molecules remains a problem. Here we demonstrate the efficient expression of a unique large single-chain (lsc) antibody in the chloroplast of the unicellular, green alga, Chlamydomonas reinhardtii. We achieved high levels of protein accumulation by synthesizing the lsc gene in chloroplast codon bias and by driving expression of the chimeric gene using either of two C. reinhardtii chloroplast promoters and 5' and 3' RNA elements. This lsc antibody, directed against glycoprotein D of the herpes simplex virus, is produced in a soluble form by the alga and assembles into higher order complexes in vivo. Aside from dimerization by disulfide bond formation, the antibody undergoes no detectable posttranslational modification. We further demonstrate that accumulation of the antibody can be modulated by the specific growth regime used to culture the alga, and by the choice of 5' and 3' elements used to drive expression of the antibody gene. These results demonstrate the utility of alga as an expression platform for recombinant proteins, and describe a new type of single chain antibody containing the entire heavy chain protein, including the Fc domain.

  14. Application of NASA General-Purpose Solver to Large-Scale Computations in Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Storaasli, Olaf O.

    2004-01-01

    Of several iterative and direct equation solvers evaluated previously for computations in aeroacoustics, the most promising was the NASA-developed General-Purpose Solver (winner of NASA's 1999 software of the year award). This paper presents detailed, single-processor statistics of the performance of this solver, which has been tailored and optimized for large-scale aeroacoustic computations. The statistics, compiled using an SGI ORIGIN 2000 computer with 12 Gb available memory (RAM) and eight available processors, are the central processing unit time, RAM requirements, and solution error. The equation solver is capable of solving 10 thousand complex unknowns in as little as 0.01 sec using 0.02 Gb RAM, and 8.4 million complex unknowns in slightly less than 3 hours using all 12 Gb. This latter solution is the largest aeroacoustics problem solved to date with this technique. The study was unable to detect any noticeable error in the solution, since noise levels predicted from these solution vectors are in excellent agreement with the noise levels computed from the exact solution. The equation solver provides a means for obtaining numerical solutions to aeroacoustics problems in three dimensions.

  15. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with a very large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs unimportant input factors.
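    The variogram idea behind VARS can be conveyed with a stripped-down sketch: perturb each input by a few lag distances, average the squared response differences, and rank inputs by the result. The lags, sample size, and test function below are assumptions for illustration; the published VARS/IVARS algorithm uses a more careful sampling and integration scheme.

    ```python
    import numpy as np

    def variogram_sensitivity(f, d, n_base=200, lags=(0.05, 0.1, 0.2), seed=0):
        """Crude directional-variogram sensitivity scores on the unit hypercube."""
        rng = np.random.default_rng(seed)
        X = rng.random((n_base, d))
        fX = np.array([f(x) for x in X])
        scores = np.zeros(d)
        for i in range(d):
            for h in lags:
                Xh = X.copy()
                Xh[:, i] = np.clip(Xh[:, i] + h, 0.0, 1.0)
                fXh = np.array([f(x) for x in Xh])
                scores[i] += 0.5 * np.mean((fXh - fX) ** 2)   # directional variogram at lag h
        return scores / len(lags)

    # toy model: x0 dominates, x1 is mild, x2 is inert
    f = lambda x: 5.0 * x[0] ** 2 + x[1] + 0.0 * x[2]
    print(np.round(variogram_sensitivity(f, 3), 4))
    ```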

  16. Input-output oriented computation algorithms for the control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Minto, K. D.

    1989-01-01

    An overview is given of work in progress aimed at developing computational algorithms addressing two important aspects in the control of large flexible space structures; namely, the selection and placement of sensors and actuators, and the resulting multivariable control law design problem. The issue of sensor/actuator set selection is particularly crucial to obtaining a satisfactory control design, as clearly a poor choice will inherently limit the degree to which good control can be achieved. With regard to control law design, the researchers are driven by concerns stemming from the practical issues associated with eventual implementation of multivariable control laws, such as reliability, limit protection, multimode operation, sampling rate selection, processor throughput, etc. Naturally, the burden imposed by dealing with these aspects of the problem can be reduced by ensuring that the complexity of the compensator is minimized. Our approach to these problems is based on extensions to input/output oriented techniques that have proven useful in the design of multivariable control systems for aircraft engines. In particular, researchers are exploring the use of relative gain analysis and the condition number as a means of quantifying the process of sensor/actuator selection and placement for shape control of a large space platform.

  17. Prediction of monthly rainfall on homogeneous monsoon regions of India based on large scale circulation patterns using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Kashid, Satishkumar S.; Maity, Rajib

    2012-08-01

    Prediction of Indian Summer Monsoon Rainfall (ISMR) is of vital importance for the Indian economy, and it has remained a great challenge for hydro-meteorologists due to inherent complexities in the climatic system. Large-scale atmospheric circulation patterns from the tropical Pacific Ocean (ENSO) and the tropical Indian Ocean (EQUINOO) are established influences on the Indian Summer Monsoon Rainfall. The information from these two large-scale atmospheric circulation patterns, in terms of their indices, is used to model the complex relationship between Indian Summer Monsoon Rainfall and the ENSO and EQUINOO indices. However, extracting the signal from such large-scale indices for modeling such complex systems is significantly difficult. Rainfall predictions have been made for 'All India' as one unit, as well as for five 'homogeneous monsoon regions of India' defined by the Indian Institute of Tropical Meteorology. The recent 'Artificial Intelligence' tool 'Genetic Programming' (GP) has been employed for modeling this problem. The Genetic Programming approach is found to capture the complex relationship between monthly Indian Summer Monsoon Rainfall and the large-scale atmospheric circulation pattern indices ENSO and EQUINOO. Research findings of this study indicate that GP-derived monthly rainfall forecasting models that use large-scale atmospheric circulation information are successful in predicting All India Summer Monsoon Rainfall with a correlation coefficient as good as 0.866, which may appear attractive for such a complex system. A separate analysis is carried out for All India Summer Monsoon Rainfall, for India as one unit and for the five homogeneous monsoon regions, based on ENSO and EQUINOO indices of March, April and May only, performed at the end of May. In this case, All India Summer Monsoon Rainfall could be predicted with a correlation coefficient of 0.70, with somewhat lower correlation coefficient (C.C.) values for the different 'homogeneous monsoon regions'.

  18. Methodological Problems of Nanotechnoscience

    NASA Astrophysics Data System (ADS)

    Gorokhov, V. G.

    Recently, we have reported on the definition of nanotechnology as a new type of NanoTechnoScience and on nanotheory as a cluster of different natural and engineering theories. Nanotechnology is not only a new type of scientific-engineering discipline; it also evolves in a "nonclassical" way. Nanoontology, or the nano scientific world view, functions as a methodological orientation for choosing the theoretical means and methods toward a solution of scientific and engineering problems. This allows a change from one explanation and scientific world view to another without any problems. Thus, nanotechnology is both a field of scientific knowledge and a sphere of engineering activity; in other words, NanoTechnoScience is similar to Systems Engineering as the analysis and design of large-scale, complex, man/machine systems, but for micro- and nanosystems. Nano systems engineering, like macro systems engineering, includes not only systems design but also complex research. Design orientation influences the change of priorities in this complex research and of the relation to knowledge: not only "knowledge about something", but also knowledge as a means of activity, since from the beginning the control and restructuring of matter at the nanoscale is a necessary element of nanoscience.

  19. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed-structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special-purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems, thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
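    As a small-scale illustration of the "domain decomposition plus general-purpose iterative method plus special-purpose preconditioner" idea (and only that: the toy matrix below is not a CEM operator), the sketch solves a complex-valued sparse system with GMRES preconditioned by block-Jacobi factorizations.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, gmres, splu

    rng = np.random.default_rng(0)
    n, nb = 400, 4                     # system size and number of "subdomain" blocks
    bs = n // nb
    A = (sp.random(n, n, density=0.02, random_state=0)
         + 1j * sp.random(n, n, density=0.02, random_state=1)
         + (10 + 5j) * sp.eye(n)).tocsr()            # complex-valued toy system matrix
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    # factor each diagonal block once; applying the preconditioner is then cheap
    blocks = [splu(A[i * bs:(i + 1) * bs, i * bs:(i + 1) * bs].tocsc()) for i in range(nb)]

    def apply_prec(x):
        y = np.empty_like(x)
        for i in range(nb):
            y[i * bs:(i + 1) * bs] = blocks[i].solve(x[i * bs:(i + 1) * bs])
        return y

    M = LinearOperator((n, n), matvec=apply_prec, dtype=complex)
    x, info = gmres(A, b, M=M)
    print("gmres info:", info, "| residual norm:", np.linalg.norm(A @ x - b))
    ```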

  20. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
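
    The unifying idea of expressing a model as a system of algebraic equations R(u, p) = 0 and obtaining total derivatives from an adjoint solve can be illustrated with a minimal NumPy sketch. The two-state model, objective, and partial derivatives below are invented for illustration; this is a generic adjoint computation, not the framework developed in the thesis.

      import numpy as np

      # Tiny implicit model: residuals R(u, p) = A(p) u - b(p) = 0 define the state u(p).
      # Objective f(u, p) = u1 + u2 + p^2; the adjoint method gives df/dp without
      # differentiating the solver itself.

      def A_of(p):
          return np.array([[4.0 + p, 1.0], [1.0, 3.0]])

      def b_of(p):
          return np.array([1.0, 2.0 * p])

      def solve_state(p):
          return np.linalg.solve(A_of(p), b_of(p))

      def total_derivative(p):
          u = solve_state(p)
          dR_du = A_of(p)                                       # partial R / partial u
          dR_dp = np.array([u[0], 0.0]) - np.array([0.0, 2.0])  # (dA/dp) u - db/dp
          df_du = np.ones(2)                                    # partial f / partial u
          df_dp = 2.0 * p                                       # partial f / partial p
          psi = np.linalg.solve(dR_du.T, df_du)                 # adjoint solve
          return df_dp - psi @ dR_dp

      p = 0.7
      h = 1e-6
      fd = ((solve_state(p + h).sum() + (p + h) ** 2)
            - (solve_state(p - h).sum() + (p - h) ** 2)) / (2 * h)
      print("adjoint derivative:", total_derivative(p), " finite difference:", fd)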

  1. A novel model-based evolutionary algorithm for multi-objective deformable image registration with content mismatch and large deformations: benchmarking efficiency and quality

    NASA Astrophysics Data System (ADS)

    Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.

    2017-02-01

    Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. The multi-objective algorithms currently used as the internal optimization engine are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, allowing the impact of such updates on the objectives at hand to be exploited more quickly through partial evaluations. We performed experiments on three registration problems. In particular, an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position were considered. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
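
    The partial-evaluation idea, re-evaluating an objective in time proportional to the number of changed variables rather than the full solution length, can be sketched independently of GOMEA and of the registration objectives themselves. The sum-of-squares objective below is an illustrative stand-in.

      import numpy as np

      def full_evaluation(x, target):
          """O(n) evaluation of a sum-of-squared-errors objective."""
          return float(np.sum((x - target) ** 2))

      def partial_evaluation(current_value, x, target, indices, new_values):
          """O(k) update of the objective when only x[indices] change."""
          old_terms = np.sum((x[indices] - target[indices]) ** 2)
          new_terms = np.sum((new_values - target[indices]) ** 2)
          return current_value - float(old_terms) + float(new_terms)

      rng = np.random.default_rng(1)
      n = 100_000
      target = rng.standard_normal(n)
      x = rng.standard_normal(n)
      value = full_evaluation(x, target)

      # Propose a small change to 5 variables (a "partial" variation step).
      idx = np.array([3, 17, 42, 999, 50_000])
      proposal = rng.standard_normal(idx.size)

      value_partial = partial_evaluation(value, x, target, idx, proposal)
      x[idx] = proposal
      assert np.isclose(value_partial, full_evaluation(x, target))
      print("objective after partial update:", value_partial)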

  2. A flipped mode teaching approach for large and advanced electrical engineering courses

    NASA Astrophysics Data System (ADS)

    Ravishankar, Jayashri; Epps, Julien; Ambikairajah, Eliathamby

    2018-05-01

    A fully flipped mode teaching approach is challenging for students in advanced engineering courses, because of demanding pre-class preparation load, due to the complex and analytical nature of the topics. When this is applied to large classes, it brings an additional complexity in terms of promoting the intended active learning. This paper presents a novel selective flipped mode teaching approach designed for large and advanced courses that has two aspects: (i) it provides selective flipping of a few topics, while delivering others in traditional face-to-face teaching, to provide an effective trade-off between the two approaches according to the demands of individual topics and (ii) it introduces technology-enabled live in-class quizzes to obtain instant feedback and facilitate collaborative problem-solving exercises. The proposed approach was implemented for a large fourth year course in electrical power engineering over three successive years and the criteria for selecting between the flipped mode teaching and traditional teaching modes are outlined. Results confirmed that the proposed approach improved both students' academic achievements and their engagement in the course, without overloading them during the teaching period.

  3. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is summarized. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  4. Molecular counting by photobleaching in protein complexes with many subunits: best practices and application to the cellulose synthesis complex

    PubMed Central

    Chen, Yalei; Deffenbaugh, Nathan C.; Anderson, Charles T.; Hancock, William O.

    2014-01-01

    The constituents of large, multisubunit protein complexes dictate their functions in cells, but determining their precise molecular makeup in vivo is challenging. One example of such a complex is the cellulose synthesis complex (CSC), which in plants synthesizes cellulose, the most abundant biopolymer on Earth. In growing plant cells, CSCs exist in the plasma membrane as six-lobed rosettes that contain at least three different cellulose synthase (CESA) isoforms, but the number and stoichiometry of CESAs in each CSC are unknown. To begin to address this question, we performed quantitative photobleaching of GFP-tagged AtCESA3-containing particles in living Arabidopsis thaliana cells using variable-angle epifluorescence microscopy and developed a set of information-based step detection procedures to estimate the number of GFP molecules in each particle. The step detection algorithms account for changes in signal variance due to changing numbers of fluorophores, and the subsequent analysis avoids common problems associated with fitting multiple Gaussian functions to binned histogram data. The analysis indicates that at least 10 GFP-AtCESA3 molecules can exist in each particle. These procedures can be applied to photobleaching data for any protein complex with large numbers of fluorescently tagged subunits, providing a new analytical tool with which to probe complex composition and stoichiometry. PMID:25232006
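
    A heavily simplified illustration of counting bleaching steps from an intensity trace is sketched below: it simulates a noisy staircase and detects change points by greedy binary segmentation with a fixed penalty. This is not the information-based procedure of the paper (in particular it does not model the fluorophore-dependent changes in signal variance); the trace parameters and penalty are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      # Simulate a photobleaching trace: 4 fluorophores bleach one by one.
      levels = [4.0, 3.0, 2.0, 1.0, 0.0]
      durations = [150, 120, 180, 100, 200]
      trace = np.concatenate([np.full(d, lv) for lv, d in zip(levels, durations)])
      trace = trace + rng.normal(0.0, 0.25, trace.size)

      def sse(segment):
          return float(np.sum((segment - segment.mean()) ** 2))

      def binary_segmentation(y, min_size=20, penalty=25.0):
          """Greedy change-point search: split while the SSE reduction beats a penalty."""
          breakpoints = []
          stack = [(0, y.size)]
          while stack:
              lo, hi = stack.pop()
              seg = y[lo:hi]
              if seg.size <= 2 * min_size:
                  continue
              base = sse(seg)
              gains = [base - (sse(seg[:k]) + sse(seg[k:]))
                       for k in range(min_size, seg.size - min_size)]
              k_best = int(np.argmax(gains)) + min_size
              if gains[k_best - min_size] > penalty:
                  breakpoints.append(lo + k_best)
                  stack.extend([(lo, lo + k_best), (lo + k_best, hi)])
          return sorted(breakpoints)

      steps = binary_segmentation(trace)
      print("estimated number of bleaching steps:", len(steps), "at", steps)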

  5. Molecular counting by photobleaching in protein complexes with many subunits: best practices and application to the cellulose synthesis complex

    DOE PAGES

    Chen, Yalei; Deffenbaugh, Nathan C.; Anderson, Charles T.; ...

    2014-09-17

    The constituents of large, multisubunit protein complexes dictate their functions in cells, but determining their precise molecular makeup in vivo is challenging. One example of such a complex is the cellulose synthesis complex (CSC), which in plants synthesizes cellulose, the most abundant biopolymer on Earth. In growing plant cells, CSCs exist in the plasma membrane as six-lobed rosettes that contain at least three different cellulose synthase (CESA) isoforms, but the number and stoichiometry of CESAs in each CSC are unknown. To begin to address this question, we performed quantitative photobleaching of GFP-tagged AtCESA3-containing particles in living Arabidopsis thaliana cells using variable-angle epifluorescence microscopy and developed a set of information-based step detection procedures to estimate the number of GFP molecules in each particle. The step detection algorithms account for changes in signal variance due to changing numbers of fluorophores, and the subsequent analysis avoids common problems associated with fitting multiple Gaussian functions to binned histogram data. The analysis indicates that at least 10 GFP-AtCESA3 molecules can exist in each particle. In conclusion, these procedures can be applied to photobleaching data for any protein complex with large numbers of fluorescently tagged subunits, providing a new analytical tool with which to probe complex composition and stoichiometry.

  6. Confronting America's most ignored crime problem: the Prison Rape Elimination Act of 2003.

    PubMed

    Dumond, Robert W

    2003-01-01

    Prisoner sexual assault has plagued American corrections since its infancy in the 19th century. Although the incidence of prisoner sexual assault is unknown, recent studies reliably suggest the problem is widespread, often affecting the most vulnerable prisoners. The mental health and public health consequences, both within institutions and the community, are complex and devastating, requiring comprehensive intervention and treatment. These crimes have been largely ignored by correctional managers, compromising the safety and security of correctional institutions. The Prison Rape Elimination Act of 2003 could play a vital role in managing a national scandal.

  7. Topics in geophysical fluid dynamics: Atmospheric dynamics, dynamo theory, and climate dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghil, M.; Childress, S.

    1987-01-01

    This text is the first study to apply systematically the successive bifurcations approach to complex time-dependent processes in large scale atmospheric dynamics, geomagnetism, and theoretical climate dynamics. The presentation of recent results on planetary-scale phenomena in the earth's atmosphere, ocean, cryosphere, mantle and core provides an integral account of mathematical theory and methods together with physical phenomena and processes. The authors address a number of problems in rapidly developing areas of geophysics, bringing into closer contact the modern tools of nonlinear mathematics and the novel problems of global change in the environment.

  8. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  9. A comparison of representations for discrete multi-criteria decision problems

    PubMed Central

    Gettinger, Johannes; Kiesling, Elmar; Stummer, Christian; Vetschera, Rudolf

    2013-01-01

    Discrete multi-criteria decision problems with numerous Pareto-efficient solution candidates place a significant cognitive burden on the decision maker. An interactive, aspiration-based search process that iteratively progresses toward the most preferred solution can alleviate this task. In this paper, we study three ways of representing such problems in a DSS, and compare them in a laboratory experiment using subjective and objective measures of the decision process as well as solution quality and problem understanding. In addition to an immediate user evaluation, we performed a re-evaluation several weeks later. Furthermore, we consider several levels of problem complexity and user characteristics. Results indicate that different problem representations have a considerable influence on search behavior, although long-term consistency appears to remain unaffected. We also found interesting discrepancies between subjective evaluations and objective measures. Conclusions from our experiments can help designers of DSS for large multi-criteria decision problems to fit problem representations to the goals of their system and the specific task at hand. PMID:24882912

  10. A Comparison of Solver Performance for Complex Gastric Electrophysiology Models

    PubMed Central

    Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.

    2016-01-01

    Computational techniques for solving systems of equations arising in gastric electrophysiology have not previously been studied with respect to the efficiency of the solution process. We present a computationally challenging problem of simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and mesh resolution required to capture geometric and functional features necessitates efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased workload considerably. Preconditioners based on block Jacobi and algebraic multigrid solvers were found to have the best overall solution times and lowest iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
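
    A minimal sketch of the block Jacobi preconditioning idea is given below, applied to conjugate gradients on a standard 2D Laplacian as a stand-in for the tetrahedral gastric electrophysiology systems; the block size and test problem are illustrative, and the algebraic multigrid alternative is not shown.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Stand-in SPD system: 2D Laplacian on a 64x64 grid.
      n = 64
      lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(sp.identity(n), lap1d) + sp.kron(lap1d, sp.identity(n))).tocsr()
      b = np.ones(A.shape[0])

      def block_jacobi(A, block_size=64):
          """Invert diagonal blocks of A to build a simple, parallel-friendly preconditioner."""
          N = A.shape[0]
          inv_blocks = []
          for start in range(0, N, block_size):
              stop = min(start + block_size, N)
              inv_blocks.append(np.linalg.inv(A[start:stop, start:stop].toarray()))
          def apply(r):
              out = np.empty_like(r)
              for i, inv in enumerate(inv_blocks):
                  s = i * block_size
                  out[s:s + inv.shape[0]] = inv @ r[s:s + inv.shape[0]]
              return out
          return spla.LinearOperator(A.shape, matvec=apply, dtype=A.dtype)

      iters = {"none": 0, "block Jacobi": 0}

      def count(key):
          def cb(xk):
              iters[key] += 1
          return cb

      spla.cg(A, b, callback=count("none"))
      spla.cg(A, b, M=block_jacobi(A), callback=count("block Jacobi"))
      print(iters)  # the preconditioned solve typically needs fewer iterations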

  11. Development and Application of the Collaborative Optimization Architecture in a Multidisciplinary Design Environment

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Kroo, I. M.

    1995-01-01

    Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies. These advantageous features include a reduction in the amount of information transferred between disciplines, the removal of large iteration loops, the ability to use different subspace optimizers among the various analysis groups, an analysis framework which is easily parallelized and can operate on heterogeneous equipment, and a structural framework that is well suited to conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.

  12. A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are an important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on two variants of a realistically-sized model of the EOS scheduling problem. These are implemented as permutation-based methods; methods that search in the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best and random mutation operators outperform our squeaky (more intelligent) operator. Furthermore, taking smaller steps towards the end of the search improves performance.
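
    The permutation-based approach, searching over priority orderings that drive a greedy scheduler, can be sketched with a toy over-subscribed single-resource problem and simulated annealing. The request model, annealing schedule, and swap mutation below are invented for illustration and are not the NASA EOS model used in the experiments.

      import math
      import random

      random.seed(0)

      # Toy over-subscription problem: each request wants a fixed time window on one
      # shared sensor; the greedy scheduler admits requests in priority order if they fit.
      requests = [(random.uniform(0, 100), random.uniform(1, 8), random.uniform(1, 10))
                  for _ in range(60)]          # (start time, duration, science value)

      def greedy_schedule(order):
          """Place requests in the given priority order; return total scheduled value."""
          busy = []                            # occupied (start, end) intervals
          value = 0.0
          for i in order:
              start, dur, val = requests[i]
              end = start + dur
              if all(end <= s or start >= e for s, e in busy):
                  busy.append((start, end))
                  value += val
          return value

      def simulated_annealing(steps=20_000, t0=5.0, cooling=0.9997):
          order = list(range(len(requests)))
          best = cur = greedy_schedule(order)
          best_order = order[:]
          temp = t0
          for _ in range(steps):
              i, j = random.sample(range(len(order)), 2)
              order[i], order[j] = order[j], order[i]          # random swap mutation
              cand = greedy_schedule(order)
              if cand >= cur or random.random() < math.exp((cand - cur) / temp):
                  cur = cand
                  if cur > best:
                      best, best_order = cur, order[:]
              else:
                  order[i], order[j] = order[j], order[i]      # undo rejected swap
              temp *= cooling
          return best, best_order

      print("best scheduled science value:", round(simulated_annealing()[0], 2))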

  13. Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.

    PubMed

    Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liu, Xiuping

    2017-10-06

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
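
    The kernel of the dynamic-updating stage, incrementally extending an existing SVD when new samples arrive instead of refactorizing the whole data set, can be sketched with a generic rank-truncated incremental SVD (Brand-style update). This is not the authors' full online LRR algorithm; the data dimensions and rank below are illustrative.

      import numpy as np

      def incremental_svd(U, s, Vt, new_cols, rank):
          """Rank-truncated SVD update when new column samples arrive."""
          L = U.T @ new_cols                       # projection onto current subspace
          H = new_cols - U @ L                     # component outside the subspace
          Q, R = np.linalg.qr(H)
          r, c = s.size, new_cols.shape[1]
          K = np.block([[np.diag(s), L],
                        [np.zeros((Q.shape[1], r)), R]])
          Uk, sk, Vtk = np.linalg.svd(K, full_matrices=False)
          U_new = np.hstack([U, Q]) @ Uk
          V_old = np.block([[Vt.T, np.zeros((Vt.shape[1], c))],
                            [np.zeros((c, Vt.shape[0])), np.eye(c)]])
          Vt_new = (V_old @ Vtk.T).T
          return U_new[:, :rank], sk[:rank], Vt_new[:rank, :]

      rng = np.random.default_rng(3)
      n, rank = 200, 5
      basis = rng.standard_normal((n, rank))
      batch = basis @ rng.standard_normal((rank, 50))           # initial static stage
      U, s, Vt = np.linalg.svd(batch, full_matrices=False)
      U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]

      stream = basis @ rng.standard_normal((rank, 10))          # dynamically added samples
      U, s, Vt = incremental_svd(U, s, Vt, stream, rank)

      full = np.hstack([batch, stream])
      err = np.linalg.norm(full - U @ np.diag(s) @ Vt) / np.linalg.norm(full)
      print("relative reconstruction error after update:", err)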

  14. On the problem of constructing a modern, economic radiotelescope complex

    NASA Technical Reports Server (NTRS)

    Bogomolov, A. F.; Sokolov, A. G.; Poperechenko, B. A.; Polyak, V. S.

    1977-01-01

    Criteria for comparing and planning the technical and economic characteristics of large parabolic reflector antenna systems and other types used in radioastronomy and deep space communications are discussed. The experience gained in making and optimizing a series of highly efficient parabolic antennas in the USSR is reviewed. Several ways are indicated for further improving the complex characteristics of antennas similar to the original TNA-1500 64m radio telescope. The suggestions can be applied in planning the characteristics of radiotelescopes which are now being built, in particular, the TNA-8000 with a diameter of 128 m.

  15. Toolsets Maintain Health of Complex Systems

    NASA Technical Reports Server (NTRS)

    2010-01-01

    First featured in Spinoff 2001, Qualtech Systems Inc. (QSI), of Wethersfield, Connecticut, adapted its Testability, Engineering, and Maintenance System (TEAMS) toolset under Small Business Innovation Research (SBIR) contracts from Ames Research Center to strengthen NASA's systems health management approach for its large, complex, and interconnected systems. Today, six NASA field centers utilize the TEAMS toolset, including TEAMS-Designer, TEAMS-RT, TEAMATE, and TEAMS-RDS. TEAMS is also being used on industrial systems that generate power, carry data, refine chemicals, perform medical functions, and produce semiconductor wafers. QSI finds TEAMS can lower costs by decreasing problems requiring service by 30 to 50 percent.

  16. Parametric Study of a YAV-8B Harrier in Ground Effect Using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.

  17. Progress in developing Poisson-Boltzmann equation solvers

    PubMed Central

    Li, Chuan; Li, Lin; Petukh, Marharyta; Alexov, Emil

    2013-01-01

    This review outlines the recent progress made in developing more accurate and efficient solutions to model electrostatics in systems comprised of bio-macromolecules and nano-objects, the last one referring to objects that do not have biological function themselves but nowadays are frequently used in biophysical and medical approaches in conjunction with bio-macromolecules. The problem of modeling macromolecular electrostatics is reviewed from two different angles: as a mathematical task provided the specific definition of the system to be modeled and as a physical problem aiming to better capture the phenomena occurring in the real experiments. In addition, specific attention is paid to methods to extend the capabilities of the existing solvers to model large systems toward applications of calculations of the electrostatic potential and energies in molecular motors, mitochondria complex, photosynthetic machinery and systems involving large nano-objects. PMID:24199185

  18. Identification challenges for large space structures

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.

    1990-01-01

    The paper examines the on-orbit modal identification of large space structures, stressing the importance of planning and experience, in preparation for the Space Station Structural Characterization Experiment (SSSCE) for the Space Station Freedom. The necessary information to foresee and overcome practical difficulties is considered in connection with seven key factors, including test objectives, dynamic complexity of the structure, data quality, extent of exploratory studies, availability and understanding of software tools, experience with similar problems, and pretest analytical conditions. These factors affect identification success in ground tests. Comparisons with similar ground tests of assembled systems are discussed, showing that the constraints of space tests make these factors more significant. The absence of data and experiences relating to on-orbit modal identification testing is shown to make identification a uniquely mathematical problem, although all spacecraft are constructed and verified by proven engineering methods.

  19. Problems in merging Earth sensing satellite data sets

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.; Goldberg, Michael J.

    1987-01-01

    Satellite remote sensing systems provide a tremendous source of data flow to the Earth science community. These systems provide scientists with data of types and on a scale previously unattainable. Looking forward to the capabilities of Space Station and the Earth Observing System (EOS), the full realization of the potential of satellite remote sensing will be handicapped by inadequate information systems. There is a growing emphasis in Earth science research to ask questions which are multidisciplinary in nature and global in scale. Many of these research projects emphasize the interactions of the land surface, the atmosphere, and the oceans through various physical mechanisms. Conducting this research requires large and complex data sets and teams of multidisciplinary scientists, often working at remote locations. A review of the problems of merging these large volumes of data into spatially referenced and manageable data sets is presented.

  20. Neural encoding of large-scale three-dimensional space-properties and constraints.

    PubMed

    Jeffery, Kate J; Wilson, Jonathan J; Casali, Giulio; Hayman, Robin M

    2015-01-01

    How the brain represents large-scale, navigable space has been the topic of intensive investigation for several decades, resulting in the discovery that neurons in a complex network of cortical and subcortical brain regions co-operatively encode distance, direction, place, movement, etc. using a variety of different sensory inputs. However, such studies have mainly been conducted in simple laboratory settings in which animals explore small, two-dimensional (i.e., flat) arenas. The real world, by contrast, is complex and three dimensional with hills, valleys, tunnels, branches, and, for species that can swim or fly, large volumetric spaces. Adding an additional dimension to space adds coding challenges, a primary reason for which is that several basic geometric properties are different in three dimensions. This article will explore the consequences of these challenges for the establishment of a functional three-dimensional metric map of space, one of which is that the brains of some species might have evolved to reduce the dimensionality of the representational space and thus sidestep some of these problems.

  1. Factorized Runge-Kutta-Chebyshev Methods

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Stephen

    2017-05-01

    The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits for accuracy at 16 digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth-order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
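
    The implementation idea, a macro-step built from an ordered sequence of forward Euler sub-steps with complex stepsizes whose product reproduces a prescribed stability polynomial, can be sketched as follows. For simplicity the sketch factorizes a truncated Taylor polynomial of exp(z) rather than the FRKC stability polynomials, so it does not reproduce the extended stability or internal-stability ordering of the published schemes; the test problem is an illustrative 1D diffusion operator.

      import math
      import numpy as np

      def factorized_step(f, u, h, degree=6):
          """One macro-step built from forward Euler sub-steps with complex stepsizes.

          The stability polynomial here is the degree-`degree` Taylor truncation of
          exp(z); writing it as prod_j (1 - z / r_j) over its complex roots r_j gives
          sub-stepsizes tau_j = -h / r_j. (Illustrative polynomial, not FRKC's.)
          """
          coeffs = [1.0 / math.factorial(k) for k in range(degree, -1, -1)]
          roots = np.roots(coeffs)                  # complex roots of the polynomial
          u = u.astype(complex)
          for r in roots:
              u = u + (-h / r) * f(u)               # ordered forward Euler sub-steps
          return u.real

      # Test problem: du/dt = A u with a 1D diffusion (Dirichlet) matrix.
      n = 50
      A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2

      x = np.linspace(0, 1, n + 2)[1:-1]
      u = np.sin(np.pi * x)                         # smooth initial condition
      h, steps = 2e-4, 50
      for _ in range(steps):
          u = factorized_step(lambda v: A @ v, u, h)

      exact = np.exp(-np.pi ** 2 * h * steps) * np.sin(np.pi * x)
      print("max error vs. heat-equation mode decay:", np.max(np.abs(u - exact)))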

  2. A laser-sheet flow visualization technique for the large wind tunnels of the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Reinath, M. S.; Ross, J. C.

    1990-01-01

    A flow visualization technique for the large wind tunnels of the National Full Scale Aerodynamics Complex (NFAC) is described. The technique uses a laser sheet generated by the NFAC Long Range Laser Velocimeter (LRLV) to illuminate a smoke-like tracer in the flow. The LRLV optical system is modified slightly, and a scanned mirror is added to generate the sheet. These modifications are described, in addition to the results of an initial performance test conducted in the 80- by 120-Foot Wind Tunnel. During this test, flow visualization was performed in the wake region behind a truck as part of a vehicle drag reduction study. The problems encountered during the test are discussed, in addition to the recommended improvements needed to enhance the performance of the technique for future applications.

  3. Controlling uncertainty: a review of human behavior in complex dynamic environments.

    PubMed

    Osman, Magda

    2010-01-01

    Complex dynamic control (CDC) tasks are a type of problem-solving environment used for examining many cognitive activities (e.g., attention, control, decision making, hypothesis testing, implicit learning, memory, monitoring, planning, and problem solving). Because of their popularity, there have been many findings from diverse domains of research (economics, engineering, ergonomics, human-computer interaction, management, psychology), but they remain largely disconnected from each other. The objective of this article is to review theoretical developments and empirical work on CDC tasks, and to introduce a novel framework (monitoring and control framework) as a tool for integrating theory and findings. The main thesis of the monitoring and control framework is that CDC tasks are characteristically uncertain environments, and subjective judgments of uncertainty guide the way in which monitoring and control behaviors attempt to reduce it. The article concludes by discussing new insights into continuing debates and future directions for research on CDC tasks.

  4. Higher-dimensional communication complexity problems: Classical protocols versus quantum ones based on Bell's theorem or prepare-transmit-measure schemes

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Żukowski, Marek

    2017-04-01

    Communication complexity problems (CCPs) are tasks in which separated parties attempt to compute a function whose inputs are distributed among the parties. Their communication is limited so that not all inputs can be sent. We show that broad classes of Bell inequalities can be mapped to CCPs and that a quantum violation of a Bell inequality is a necessary and sufficient condition for an enhancement of the related CCP beyond its classical limitation. However, one can implement CCPs by transmitting a quantum system, encoding no more information than is allowed in the CCP, and extracting information by performing measurements. We show that for a large class of Bell inequalities, the improvement of the CCP associated with a quantum violation of a Bell inequality can be no greater than the improvement obtained from quantum prepare-transmit-measure strategies.

  5. Equilibrium expert: an add-in to Microsoft Excel for multiple binding equilibrium simulations and parameter estimations.

    PubMed

    Raguin, Olivier; Gruaz-Guyon, Anne; Barbet, Jacques

    2002-11-01

    An add-in to Microsoft Excel was developed to simulate multiple binding equilibria. A partition function, readily written even when the equilibrium is complex, describes the experimental system. It involves the concentrations of the different free molecular species and of the different complexes present in the experiment. As a result, the software is not restricted to a series of predefined experimental setups but can handle a large variety of problems involving up to nine independent molecular species. Binding parameters are estimated by nonlinear least-squares fitting of experimental measurements as supplied by the user. The fitting process allows user-defined weighting of the experimental data. The flexibility of the software and the way it may be used to describe common experimental situations and to deal with usual problems such as tracer reactivity or nonspecific binding are demonstrated by a few examples. The software is available free of charge upon request.

  6. To Cooperate or Not to Cooperate: Why Behavioural Mechanisms Matter

    PubMed Central

    2016-01-01

    Mutualistic cooperation often requires multiple individuals to behave in a coordinated fashion. Hence, while the evolutionary stability of mutualistic cooperation poses no particular theoretical difficulty, its evolutionary emergence faces a chicken and egg problem: an individual cannot benefit from cooperating unless other individuals already do so. Here, we use evolutionary robotic simulations to study the consequences of this problem for the evolution of cooperation. In contrast with standard game-theoretic results, we find that the transition from solitary to cooperative strategies is very unlikely, whether interacting individuals are genetically related (cooperation evolves in 20% of all simulations) or unrelated (only 3% of all simulations). We also observe that successful cooperation between individuals requires the evolution of a specific and rather complex behaviour. This behavioural complexity creates a large fitness valley between solitary and cooperative strategies, making the evolutionary transition difficult. These results reveal the need for research on biological mechanisms which may facilitate this transition. PMID:27148874

  7. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  8. Brain Dynamics: Methodological Issues and Applications in Psychiatric and Neurologic Diseases

    NASA Astrophysics Data System (ADS)

    Pezard, Laurent

    The human brain is a complex dynamical system generating the EEG signal. Numerical methods developed to study complex physical dynamics have been used to characterize EEG since the mid-eighties. This endeavor raised several issues related to the specificity of EEG. Firstly, theoretical and methodological studies should address the major differences between the dynamics of the human brain and physical systems. Secondly, this approach to the EEG signal should prove relevant for dealing with physiological or clinical problems. A set of studies performed in our group is presented here in the context of these two issues. After the discussion of methodological drawbacks, we review numerical simulations related to the high dimension and spatial extension of brain dynamics. Experimental studies in neurologic and psychiatric disease are then presented. We conclude that, while it is now clear that brain dynamics change in relation to clinical situations, methodological problems remain largely unsolved.

  9. H2BC: a new technique for NMR analysis of complex carbohydrates.

    PubMed

    Petersen, Bent O; Vinogradov, Evguenii; Kay, William; Würtz, Peter; Nyberg, Nils T; Duus, Jens Ø; Sørensen, Ole W

    2006-03-20

    It is demonstrated that the H2BC NMR pulse sequence (J. Am. Chem. Soc. 2005, 127, 6154; Magn. Reson. Chem. 2005, 43, 971-974) offers unambiguous assignments and significant simplification of NMR spectra of large and complex carbohydrates compared to other techniques for the establishment of correlations over more than one bond. H2BC almost exclusively correlates protons and proton-bearing carbon spins separated by two covalent bonds and is independent of occasionally vanishing (2)J(CH) coupling constants, which alleviates the problem of missing two-bond correlations in HMBC spectra. H2BC also solves the problem of distinguishing two- and three-bond correlations in HSQC-TOCSY or HMBC. It is a further asset of H2BC that the experiment is significantly shorter than HMBC and HSQC-TOCSY, and hence less sensitive to transverse relaxation. The H2BC experiment is demonstrated on an approximately 30-residue oligosaccharide from Francisella victoria.

  10. Complex Problem Solving: What It Is and What It Is Not

    PubMed Central

    Dörner, Dietrich; Funke, Joachim

    2017-01-01

    Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242

  11. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data are described by a complex, nonlinear, multimodal solution space with 6 to 13 interdependent variables, which is costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality of and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
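
    The adaptive-penalty idea, adjusting the penalty weight during the run according to the feasibility of the current best individual, can be sketched with a plain real-coded GA on an invented constrained multimodal test function. This is a generic illustration: it uses a simple feasibility-based update rule rather than the scheme extended in the paper, and omits the local-search hybridization.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy constrained minimization standing in for a multimodal spectroscopic fit:
      # minimize f(x) subject to g(x) <= 0, handled with an adaptive penalty term.
      def objective(x):
          return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x), axis=-1) + 10.0 * x.shape[-1]

      def constraint(x):                       # feasible when sum(x) >= 1
          return 1.0 - np.sum(x, axis=-1)

      def penalized(x, penalty):
          return objective(x) + penalty * np.maximum(constraint(x), 0.0) ** 2

      def genetic_algorithm(dim=6, pop_size=60, generations=300):
          pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
          penalty = 1.0
          for _ in range(generations):
              fitness = penalized(pop, penalty)
              # Tournament selection, blend crossover, Gaussian mutation.
              idx = rng.integers(0, pop_size, (pop_size, 2))
              parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]],
                                     idx[:, 0], idx[:, 1])]
              alpha = rng.uniform(0.0, 1.0, (pop_size, 1))
              children = alpha * parents + (1 - alpha) * parents[::-1]
              children += rng.normal(0.0, 0.1, children.shape)
              pop = np.where(penalized(children, penalty)[:, None] < fitness[:, None],
                             children, pop)
              # Adaptive penalty: tighten if the current best is infeasible, relax otherwise.
              best = pop[np.argmin(penalized(pop, penalty))]
              penalty = min(penalty * 1.5, 1e6) if constraint(best) > 0 else max(penalty / 1.5, 1.0)
          return best, objective(best), constraint(best)

      x_best, f_best, g_best = genetic_algorithm()
      print("best objective:", round(float(f_best), 3),
            "constraint violation:", max(float(g_best), 0.0))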

  12. Geometric quantification of features in large flow fields.

    PubMed

    Kendall, Wesley; Huang, Jian; Peterka, Tom

    2012-01-01

    Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.

  13. Measuring the Complexity of Seismicity Pattern Evolution

    NASA Astrophysics Data System (ADS)

    Goltz, C.

    2004-12-01

    "Complexity" has become a ubiquitous term in science. However, there is, much as with "fractality", no clear definition of what complexity actually means. Yet, it is important to distinguish between what is merely complicated and what is complex in the sense that simple rules can give rise to very rich behaviour. Seismicity is certainly a complicated phenomenon (difficult to understand) but simple models such as cellular automata indicate that earthquakes are truly complex. From the observational point of view, there exists the problem of quantification of complexity in real-world seismicity patterns (in the absence of even a rigid definition of complexity). Such a measurement is desirable, however, not only for fundamental understanding but also for monitoring and possibly for prediction purposes. Maybe the most workable definitions of complexity exist in informatics, summarised under the topic of algorithmic complexity. Here, after introducing the concepts, I apply such measures of complexity to temporally evolving seismicity patterns from different geographic regions. Finally, I assess the usefulness of the approach and discuss the results in view of the occurrence of large earthquakes.

  14. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial and essential problem in the structure analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from the optimization-based algorithms to the heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with a higher accuracy and a lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism in the wake of foraging process of Physarum, which is a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be identified from the intra-community edges in a network and the positive feedback of solving process in an algorithm can be further enhanced, which are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to estimate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.

  15. Confined One Dimensional Harmonic Oscillator as a Two-Mode System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gueorguiev, V G; Rau, A P; Draayer, J P

    2005-07-11

    The one-dimensional harmonic oscillator in a box problem is possibly the simplest example of a two-mode system. This system has two exactly solvable limits, the harmonic oscillator and a particle in a (one-dimensional) box. Each of the two limits has a characteristic spectral structure describing the two different excitation modes of the system. Near each of these limits, one can use perturbation theory to achieve an accurate description of the eigenstates. Away from the exact limits, however, one has to carry out a matrix diagonalization because the basis-state mixing that occurs is typically too large to be reproduced in any other way. An alternative to casting the problem in terms of one or the other basis set consists of using an "oblique" basis that uses both sets. Through a study of this alternative in this one-dimensional problem, we are able to illustrate practical solutions and infer the applicability of the concept for more complex systems, such as in the study of complex nuclei where oblique-basis calculations have been successful.

  16. Immersed boundary methods for simulating fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Yang, Xiaolei

    2014-02-01

    Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.

  17. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept described here will provide that framework.

  18. Disentangling the stochastic behavior of complex time series

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Tabar, M. Reza Rahimi; Peinke, Joachim; Lehnertz, Klaus

    2016-10-01

    Complex systems involving a large number of degrees of freedom generally exhibit non-stationary dynamics, which can result in either continuous or discontinuous sample paths of the corresponding time series. The latter sample paths may be caused by discontinuous events, or jumps, with some distributed amplitudes, and disentangling effects caused by such jumps from effects caused by normal diffusion processes is a main problem for a detailed understanding of the stochastic dynamics of complex systems. Here we introduce a non-parametric method to address this general problem. By means of a stochastic dynamical jump-diffusion model, we separate deterministic drift terms from different stochastic behaviors, namely diffusive and jumpy ones, and show that all of the unknown functions and coefficients of this model can be derived directly from measured time series. We demonstrate the applicability of our method to empirical observations by a data-driven inference of the deterministic drift term and of the diffusive and jumpy behavior in brain dynamics from ten epilepsy patients. In particular, these different stochastic behaviors provide extra information that can be regarded as valuable for diagnostic purposes.
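
    The diffusive (non-jump) part of such an analysis, estimating drift and diffusion functions nonparametrically from conditional increment moments, can be sketched on simulated Ornstein-Uhlenbeck data as below. The jump detection and higher-order Kramers-Moyal coefficients used to disentangle jumpy behavior are not included; all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)

      # Simulate an Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW as test data.
      theta, sigma, dt, n = 1.0, 0.5, 1e-3, 500_000
      x = np.empty(n)
      x[0] = 0.0
      noise = rng.standard_normal(n - 1) * np.sqrt(dt)
      for i in range(n - 1):
          x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

      # Nonparametric drift/diffusion estimates from conditional increment moments
      # (first and second Kramers-Moyal coefficients), binned over the state space.
      bins = np.linspace(-0.8, 0.8, 17)
      centers = 0.5 * (bins[:-1] + bins[1:])
      dx = np.diff(x)
      which = np.digitize(x[:-1], bins) - 1

      drift_est, diff_est = [], []
      for b in range(len(centers)):
          inc = dx[which == b]
          drift_est.append(inc.mean() / dt)
          diff_est.append(inc.var() / dt)

      print("drift slope (expect ~ -1.0):",
            round(np.polyfit(centers, drift_est, 1)[0], 3))
      print("diffusion (expect ~ sigma^2 = 0.25):", round(float(np.mean(diff_est)), 3))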

  19. Accounting for Parameter Uncertainty in Complex Atmospheric Models, With an Application to Greenhouse Gas Emissions Evaluation

    NASA Astrophysics Data System (ADS)

    Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.

    2016-12-01

    In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out, in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.

  20. Conversion of wastelands into state ownership for the needs of high-rise construction

    NASA Astrophysics Data System (ADS)

    Ganebnykh, Elena

    2018-03-01

    High-rise construction in big cities faces a shortage of land in downtown areas. An audit of economic complexes revealed a large volume of wastelands. Converting wastelands into state and municipal ownership helps, in part, to solve the lack of space for high-rise infill construction in urban areas. The article investigates the problem of converting wastelands into state and municipal ownership. The research revealed no clear existing procedure for such conversion. To form a unified system for identifying such plots, a universal algorithm was developed to identify ownerless immovable property and convert it into state or municipal ownership.

  1. Games that Enlist Collective Intelligence to Solve Complex Scientific Problems.

    PubMed

    Burnett, Stephen; Furlong, Michelle; Melvin, Paul Guy; Singiser, Richard

    2016-03-01

    There is great value in employing the collective problem-solving power of large groups of people. Technological advances have allowed computer games to be utilized by a diverse population to solve problems. Science games are becoming more popular and cover various areas such as sequence alignments, DNA base-pairing, and protein and RNA folding. While these tools have been developed for the general population, they can also be used effectively in the classroom to teach students about various topics. Many games also employ a social component that entices students to continue playing and thereby to continue learning. The basic functions of game play and the potential of game play as a tool in the classroom are discussed in this article.

  2. Games that Enlist Collective Intelligence to Solve Complex Scientific Problems

    PubMed Central

    Burnett, Stephen; Furlong, Michelle; Melvin, Paul Guy; Singiser, Richard

    2016-01-01

    There is great value in employing the collective problem-solving power of large groups of people. Technological advances have allowed computer games to be utilized by a diverse population to solve problems. Science games are becoming more popular and cover various areas such as sequence alignments, DNA base-pairing, and protein and RNA folding. While these tools have been developed for the general population, they can also be used effectively in the classroom to teach students about various topics. Many games also employ a social component that entices students to continue playing and thereby to continue learning. The basic functions of game play and the potential of game play as a tool in the classroom are discussed in this article. PMID:27047610

  3. Decision-Theoretic Control of Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo; Washington, Richard; Bernstein, Daniel S.; Mouaddib, Abdel-Illah; Morris, Robert (Technical Monitor)

    2003-01-01

    Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We describe two decision-theoretic approaches to maximize the productivity of planetary rovers: one based on adaptive planning and the other on hierarchical reinforcement learning. Both approaches map the problem into a Markov decision problem and attempt to solve a large part of the problem off-line, exploiting the structure of the plan and independence between plan components. We examine the advantages and limitations of these techniques and their scalability.
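
    A minimal sketch of the kind of Markov decision problem mapping mentioned in the abstract is given below; the battery-level states, actions, transition probabilities, and rewards are invented for illustration and do not represent the rover planners described.

```python
# Tiny value-iteration sketch for a resource-constrained MDP (illustrative only):
# states are battery levels, actions trade science reward against power consumption.
import numpy as np

states = range(4)                      # battery level 0..3
actions = ["idle", "observe"]
gamma = 0.95

def transition(s, a):
    # returns a list of (probability, next_state, reward); the numbers are made up
    if a == "idle":
        return [(1.0, min(s + 1, 3), 0.0)]                   # recharge slowly
    if s == 0:
        return [(1.0, 0, -1.0)]                              # cannot observe when empty
    return [(0.9, s - 1, 2.0), (0.1, max(s - 2, 0), 2.0)]    # observing drains power

V = np.zeros(len(states))
for _ in range(200):                   # value iteration to (near) convergence
    V = np.array([max(sum(p * (r + gamma * V[s2]) for p, s2, r in transition(s, a))
                      for a in actions) for s in states])

policy = [max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in transition(s, a)))
          for s in states]
print(dict(zip(states, policy)))
```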

  4. A survey of automated methods for sensemaking support

    NASA Astrophysics Data System (ADS)

    Llinas, James

    2014-05-01

    Complex, dynamic problems in general present a challenge for the design of analysis support systems and tools largely because there is limited reliable a priori procedural knowledge descriptive of the dynamic processes in the environment. Problem domains that are non-cooperative or adversarial impute added difficulties involving suboptimal observational data and/or data containing the effects of deception or covertness. The fundamental nature of analysis in these environments is based on composite approaches involving mining or foraging over the evidence, discovery and learning processes, and the synthesis of fragmented hypotheses; together, these can be labeled as sensemaking procedures. This paper reviews and analyzes the features, benefits, and limitations of a variety of automated techniques that offer possible support to sensemaking processes in these problem domains.

  5. Application of heuristic satellite plan synthesis algorithms to requirements of the WARC-88 allotment plan

    NASA Technical Reports Server (NTRS)

    Heyward, Ann O.; Reilly, Charles H.; Walton, Eric K.; Mata, Fernando; Olen, Carl

    1990-01-01

    Creation of an Allotment Plan for the Fixed Satellite Service at the 1988 Space World Administrative Radio Conference (WARC) represented a complex satellite plan synthesis problem, involving a large number of planned and existing systems. Solutions to this problem at WARC-88 required the use of both automated and manual procedures to develop an acceptable set of system positions. Development of an Allotment Plan may also be attempted through solution of an optimization problem, known as the Satellite Location Problem (SLP). Three automated heuristic procedures, developed specifically to solve SLP, are presented. The heuristics are then applied to two specific WARC-88 scenarios. Solutions resulting from the fully automated heuristics are then compared with solutions obtained at WARC-88 through a combination of both automated and manual planning efforts.

  6. A case-based, problem-based learning approach to prepare master of public health candidates for the complexities of global health.

    PubMed

    Leon, Juan S; Winskell, Kate; McFarland, Deborah A; del Rio, Carlos

    2015-03-01

    Global health is a dynamic, emerging, and interdisciplinary field. To address current and emerging global health challenges, we need a public health workforce with adaptable and collaborative problem-solving skills. In the 2013-2014 academic year, the Hubert Department of Global Health at the Rollins School of Public Health-Emory University launched an innovative required core course for its first-year Master of Public Health students in the global health track. The course uses a case-based, problem-based learning approach to develop global health competencies. Small teams of students propose solutions to these problems by identifying learning issues and critically analyzing and synthesizing new information. We describe the course structure and logistics used to apply this approach in the context of a large class and share lessons learned.

  7. Water insoluble and soluble lipids for gene delivery.

    PubMed

    Mahato, Ram I

    2005-04-05

    Among various synthetic gene carriers currently in use, liposomes composed of cationic lipids and co-lipids remain the most efficient transfection reagents. Physicochemical properties of lipid/plasmid complexes, such as cationic lipid structure, cationic lipid to co-lipid ratio, charge ratio, particle size and zeta potential have significant influence on gene expression and biodistribution. However, most cationic lipids are toxic and cationic liposomes/plasmid complexes do not disperse well inside the target tissues because of their large particle size. To overcome the problems associated with cationic lipids, we designed water soluble lipopolymers for gene delivery to various cells and tissues. This review provides a critical discussion on how the components of water insoluble and soluble lipids affect their transfection efficiency and biodistribution of lipid/plasmid complexes.

  8. Multi-level damage identification with response reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Dong; Xu, You-Lin

    2017-10-01

    Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global identifiability of the inverse problem probably hinder the realization of model-updating-based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed as a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first is at the macro element level to locate the potentially damaged region and the second is over the suspicious substructures to further locate as well as quantify the damage severity. In each level's identification, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computation amount but also increasing the damage identifiability. Besides, the Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction does improve the identification accuracy of damage localization and quantification considerably.
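
    The response reconstruction step relies on a Kalman filter; the sketch below is a generic textbook filter that reconstructs an unmeasured velocity state from noisy displacement measurements, with invented system matrices, and is not the substructural formulation used in the paper.

```python
# Generic Kalman filter sketch: reconstruct an unmeasured response (velocity) from
# noisy measurements of displacement.  The matrices are illustrative, not the paper's model.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # simple discrete-time state transition
H = np.array([[1.0, 0.0]])              # only the first state is measured
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[1e-2]])                  # measurement noise covariance

x_est = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])

for _ in range(500):
    x_true = A @ x_true                              # simulate the "true" system
    z = H @ x_true + rng.normal(0.0, 0.1, size=1)    # noisy displacement measurement

    x_pred = A @ x_est                               # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                              # update
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_pred + (K @ (z - H @ x_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("reconstructed velocity %.3f (true 1.0)" % x_est[1])
```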

  9. Rural Australian Women's Legal Help Seeking for Intimate Partner Violence: Women Intimate Partner Violence Victim Survivors' Perceptions of Criminal Justice Support Services

    ERIC Educational Resources Information Center

    Ragusa, Angela T.

    2013-01-01

    Intimate partner violence (IPV) is a widespread, ongoing, and complex global social problem, whose victims continue to be largely women. Women often prefer to rely on friends and family for IPV help, yet when informal support is unavailable they remain hesitant to contact formal services, particularly legal support for many reasons. This study…

  10. Probing the Topological Properties of Complex Networks Modeling Short Written Texts

    PubMed Central

    Amancio, Diego R.

    2015-01-01

    In recent years, graph theory has been widely employed to probe several language properties. More specifically, the so-called word adjacency model has been proven useful for tackling several practical problems, especially those relying on textual stylistic analysis. The most common approach to treating texts as networks has simply considered either large pieces of texts or entire books. This approach has certainly worked well—many informative discoveries have been made this way—but it raises an uncomfortable question: could there be important topological patterns in small pieces of texts? To address this problem, the topological properties of subtexts sampled from entire books were probed. Statistical analyses performed on a dataset comprising 50 novels revealed that most of the traditional topological measurements are stable for short subtexts. When the performance of the authorship recognition task was analyzed, it was found that a proper sampling yields a discriminability similar to the one found with full texts. Surprisingly, the support vector machine classification based on the characterization of short texts outperformed the one performed with entire books. These findings suggest that a local topological analysis of large documents might improve their global characterization. Most importantly, it was verified, as a proof of principle, that short texts can be analyzed with the methods and concepts of complex networks. As a consequence, the techniques described here can be extended in a straightforward fashion to analyze texts as time-varying complex networks. PMID:25719799
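
    A minimal sketch of the word adjacency model is shown below: consecutive words are linked and standard topological measurements are computed with networkx. The example text and the chosen measurements are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of the word-adjacency model: consecutive words become linked nodes, and
# standard topological measurements are computed on the resulting network.
# The text and the chosen measurements are illustrative, not the paper's pipeline.
import networkx as nx

text = ("the small text is mapped to a network and the network "
        "is probed with standard topological measurements")
words = text.split()

G = nx.Graph()
for w1, w2 in zip(words, words[1:]):   # adjacency: link each word to its successor
    if w1 != w2:
        G.add_edge(w1, w2)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average clustering:", round(nx.average_clustering(G), 3))
print("average shortest path:", round(nx.average_shortest_path_length(G), 3))
```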

  11. Aeroelastic analysis of versatile thermal insulation (VTI) panels with pinched boundary conditions

    NASA Astrophysics Data System (ADS)

    Carrera, Erasmo; Zappino, Enrico; Patočka, Karel; Komarek, Martin; Ferrarese, Adriano; Montabone, Mauro; Kotzias, Bernhard; Huermann, Brian; Schwane, Richard

    2014-03-01

    Launch vehicle design and analysis is a crucial problem in space engineering. The large range of external conditions and the complexity of space vehicles make the solution of the problem very challenging. The problem considered in the present work deals with the versatile thermal insulation (VTI) panel. This thermal protection system is designed to reduce heat fluxes on the LH2 tank during long coasting phases. Because of the unconventional boundary conditions and the large-scale geometry of the panel, the aeroelastic behaviour of VTI is investigated in the present work. Known results from the literature related to similar problems are reviewed by considering the effect of various Mach regimes, including boundary layer thickness effects, in-plane mechanical and thermal loads, non-linear effects and the amplitude of limit cycle oscillations. A dedicated finite element model is developed for the supersonic regime. The models used for coupling the orthotropic layered structural model with Piston Theory aerodynamic models allow the calculation of flutter conditions for curved panels supported at a discrete number of points. An advanced computational aeroelasticity tool is developed using various dedicated commercial software packages (CFX, ZAERO, EDGE). A wind tunnel test campaign is carried out to assess the computational tool in the analysis of this type of problem.

  12. Managing Network Partitions in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif

    Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Consequently, the problem of network partitions and mergers is closely tied to fault-tolerance and self-management in large-scale systems, which makes resilience to network partitions a crucial requirement for building any structured peer-to-peer system. Despite this, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect addresses the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible as the tradeoff between message complexity and time complexity can be adjusted by a parameter.

  13. Adaptive Wavelet Modeling of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.

    2009-12-01

    Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency of the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
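
    The appeal of wavelets for adaptivity can be illustrated with a one-dimensional toy using PyWavelets: a piecewise-smooth, conductivity-like profile is captured by a small number of significant coefficients, which is the property an adaptive solver exploits. This is not the authors' three-dimensional geoelectric code.

```python
# Illustration of why wavelets allow quasi-minimal degrees of freedom: a piecewise-
# smooth profile is represented by few significant coefficients, which an adaptive
# solver can refine locally.  This is a 1-D toy, not the authors' 3-D geoelectric solver.
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 1024)
model = np.where(x < 0.4, 100.0, 10.0) + 2.0 * np.sin(8 * np.pi * x)  # conductivity-like profile

coeffs = pywt.wavedec(model, "db4", level=6)
flat = np.concatenate(coeffs)
significant = np.sum(np.abs(flat) > 1.0)          # coefficients worth keeping
print("kept %d of %d coefficients" % (significant, flat.size))

# reconstruct from the thresholded (sparse) representation
thresholded = [pywt.threshold(c, 1.0, mode="hard") for c in coeffs]
approx = pywt.waverec(thresholded, "db4")
print("max reconstruction error: %.3f" % np.abs(approx[: model.size] - model).max())
```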

  14. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified. The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
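
    The response surface step can be illustrated with a minimal quadratic surrogate fitted by least squares to a small designed experiment; the response function and factor ranges below are invented and are unrelated to the turbofan case study.

```python
# Minimal response-surface sketch: fit a quadratic surrogate to a few "experiments"
# over two design factors, then query it cheaply during design exploration.
# The response function and factor ranges are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(15, 2))                   # small designed experiment
y = 3.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 15)

def quad_terms(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)    # least-squares fit

X_new = np.array([[0.3, -0.2]])
print("predicted response:", float(quad_terms(X_new) @ beta))
```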

  15. Ability of the Child Behavior Checklist-Dysregulation Profile and the Youth Self Report-Dysregulation Profile to identify serious psychopathology and association with correlated problems in high-risk children and adolescents.

    PubMed

    Dölitzsch, Claudia; Kölch, Michael; Fegert, Jörg M; Schmeck, Klaus; Schmid, Marc

    2016-11-15

    The current analyses examined whether the dysregulation profile (DP) 1) could be used to identify children and adolescents at high risk for complex and serious psychopathology and 2) was correlated to other emotional and behavioral problems (such as delinquent behavior or suicide ideation). DP was assessed using both the Child Behavior Checklist (CBCL) and the Youth Self Report (YSR) in a residential care sample. Children and adolescents (N=374) aged 10-18 years living in residential care in Switzerland completed the YSR, and their professional caregivers completed the CBCL. Participants meeting criteria for DP (T-score ≥67 on the anxious/depressed, attention problems, and aggressive behavior scales of the YSR/CBCL) were compared against those who did not for the presence of complex psychopathology (defined as the presence of both emotional and behavioral disorders), and also for the prevalence of several psychiatric diagnoses, suicidal ideation, traumatic experiences, delinquent behaviors, and problems related to quality of life. The diagnostic criteria for CBCL-DP and YSR-DP were met by just 44 (11.8%) and 25 (6.7%) of participants. Only eight participants (2.1%) met the criteria on both instruments. Further analyses were conducted separately for the CBCL-DP and YSR-DP groups. DP was associated with complex psychopathology in only 34.4% of cases according to the CBCL and in 60% of cases according to the YSR. The YSR-DP was somewhat more likely to be associated with psychiatric disorders and associated problems than was the CBCL-DP. Because of the relatively small overlap between the CBCL-DP and YSR-DP, analyses were conducted largely with different samples, likely contributing to the different results. Despite a high rate of psychopathology in the population studied, both the YSR-DP and the CBCL-DP were able to detect only a small proportion of those with complex psychiatric disorders. This result questions the validity of the YSR-DP and the CBCL-DP in detecting subjects with complex and serious psychopathology. It is possible that different screening instruments may be more effective. Copyright © 2016 Elsevier B.V. All rights reserved.
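
    The DP rule stated above (T-scores of at least 67 on the three syndrome scales) can be encoded in a few lines; the helper and the example scores below are illustrative only.

```python
# Helper encoding the dysregulation-profile rule stated in the abstract:
# T-scores of 67 or above on anxious/depressed, attention problems, and
# aggressive behavior.  The example scores are invented.
DP_SCALES = ("anxious_depressed", "attention_problems", "aggressive_behavior")

def meets_dysregulation_profile(t_scores, cutoff=67):
    """Return True if all three DP syndrome scales reach the cutoff."""
    return all(t_scores[scale] >= cutoff for scale in DP_SCALES)

example = {"anxious_depressed": 70, "attention_problems": 68, "aggressive_behavior": 66}
print(meets_dysregulation_profile(example))   # False: aggressive_behavior is below 67
```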

  16. Detection of expression quantitative trait Loci in complex mouse crosses: impact and alleviation of data quality and complex population substructure.

    PubMed

    Iancu, Ovidiu D; Darakjian, Priscila; Kawane, Sunita; Bottomly, Daniel; Hitzemann, Robert; McWeeney, Shannon

    2012-01-01

    Complex Mus musculus crosses, e.g., heterogeneous stock (HS), provide increased resolution for quantitative trait loci detection. However, increased genetic complexity challenges detection methods, with discordant results due to low data quality or complex genetic architecture. We quantified the impact of these factors across three mouse crosses and two different detection methods, identifying procedures that greatly improve detection quality. Importantly, HS populations have complex genetic architectures not fully captured by the whole genome kinship matrix, calling for the incorporation of chromosome-specific relatedness information. We analyze three increasingly complex crosses, using gene expression levels as quantitative traits. The three crosses were an F(2) intercross, a HS formed by crossing four inbred strains (HS4), and a HS (HS-CC) derived from the eight lines found in the collaborative cross. Brain (striatum) gene expression and genotype data were obtained using the Illumina platform. We found large disparities between methods, with concordance varying as genetic complexity increased; this problem was more acute for probes with distant regulatory elements (trans). A suite of data filtering steps resulted in substantial increases in reproducibility. Genetic relatedness between samples generated an overabundance of detected eQTLs; an adjustment procedure that includes the kinship matrix attenuates this problem. However, we find that relatedness between individuals is not evenly distributed across the genome; information from distinct chromosomes results in relatedness structure different from the whole genome kinship matrix. Shared polymorphisms from distinct chromosomes collectively affect expression levels, confounding eQTL detection. We suggest that considering chromosome-specific relatedness can result in improved eQTL detection.
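
    A generic sketch of the kind of kinship adjustment discussed is shown below: the phenotype and design are whitened with an assumed covariance built from a kinship matrix before a simple regression. The kinship structure, variance ratio, and data are invented, and this is not the authors' chromosome-specific procedure.

```python
# Generic sketch of kinship adjustment: whiten phenotype and genotype with the
# assumed covariance V = h2*K + (1-h2)*I before a simple regression, so relatedness
# does not inflate association signals.  K, h2, and the data are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 100
K = np.eye(n)
K[:50, :50] += 0.4                           # a related subgroup (toy kinship structure)
np.fill_diagonal(K, 1.0)

h2 = 0.5                                     # assumed genetic variance fraction
V = h2 * K + (1.0 - h2) * np.eye(n)
L = np.linalg.cholesky(V)

genotype = rng.integers(0, 3, size=n).astype(float)
expression = 0.3 * genotype + rng.multivariate_normal(np.zeros(n), V)

X = np.column_stack([np.ones(n), genotype])
Xw = np.linalg.solve(L, X)                   # whitened design
yw = np.linalg.solve(L, expression)          # whitened phenotype
beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print("kinship-adjusted effect estimate: %.3f" % beta[1])
```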

  17. Fast and accurate detection of spread source in large complex networks.

    PubMed

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, with the limitation that the IDs of spreaders are unknown to observers, demonstrate that for scale-free networks with such a limitation GMLA yields higher-quality localization results than PTVA does.
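
    A simplified, toy version of the observer-selection idea is sketched below: only the earliest-reporting observers are kept and candidate sources are scored by how consistently shortest-path delays explain their reports. The network, the observers, and the consistency score are assumptions for illustration and do not reproduce the published PTVA/GMLA likelihood.

```python
# Simplified illustration of observer-based source localization: keep only the
# observers with the smallest reported arrival times (high-quality information) and
# score candidate sources by how consistently shortest-path delays explain those
# reports.  This toy consistency score is not the published PTVA/GMLA likelihood.
import networkx as nx
import numpy as np

G = nx.grid_2d_graph(15, 15)                       # simple stand-in network
true_source = (4, 9)
delay = nx.single_source_shortest_path_length(G, true_source)

rng = np.random.default_rng(0)
observers = [(0, 0), (14, 14), (0, 14), (14, 0), (7, 3)]
reports = {o: delay[o] + rng.normal(0.0, 0.1) for o in observers}

best = sorted(reports, key=reports.get)[:3]        # ignore late (low-quality) observers

def score(candidate):
    d = nx.single_source_shortest_path_length(G, candidate)
    return np.var([reports[o] - d[o] for o in best])   # low variance = consistent origin

estimate = min(G.nodes, key=score)
print("estimated source:", estimate, "true source:", true_source)
```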

  18. REVIEWS OF TOPICAL PROBLEMS: Large-scale star formation in galaxies

    NASA Astrophysics Data System (ADS)

    Efremov, Yurii N.; Chernin, Artur D.

    2003-01-01

    A brief review is given of the history of modern ideas on the ongoing star formation process in the gaseous disks of galaxies. Recent studies demonstrate the key role of the interplay between the gas self-gravitation and its turbulent motions. The large scale supersonic gas flows create structures of enhanced density which then give rise to the gravitational condensation of gas into stars and star clusters. Formation of star clusters, associations and complexes is considered, as well as the possibility of isolated star formation. Special emphasis is placed on star formation under the action of ram pressure.

  19. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithms are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on simple assembly line balancing of the Type-E problem (SALB-E), since it is a general and complex problem. The SALB-E problem is the variant of SALB that considers the number of workstations and the cycle time simultaneously for the purpose of maximising the line efficiency. This paper reviews previous work done to optimise the SALB-E problem. It also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From this review, it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.
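
    For reference, the line efficiency that SALB-E maximises is conventionally written as follows (with m workstations, cycle time c, and task times t_i); this standard form is stated here for clarity and is not quoted from the paper.

```latex
% Conventional line-efficiency measure jointly optimised in SALB-E
% (m = number of workstations, c = cycle time, t_i = task processing times)
E = \frac{\sum_{i=1}^{n} t_i}{m \cdot c}
```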

  20. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
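
    A small numerical sketch of the variable-splitting and gradient-projection idea is given below: writing x = u - v with u, v >= 0 makes the L1 penalty linear, and the resulting bound-constrained quadratic problem is solved by projected gradient. The matrix A is a generic random operator, not an ECG transfer model.

```python
# Sketch of the variable-splitting + gradient-projection idea: with x = u - v and
# u, v >= 0 the L1 penalty becomes linear, giving a bound-constrained quadratic
# problem solved by projected gradient.  A is a generic matrix, not an ECG transfer model.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 120))
x_true = np.zeros(120)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]              # signed "potentials"
b = A @ x_true + 0.01 * rng.normal(size=80)
lam = 0.5

u = np.zeros(120)
v = np.zeros(120)
step = 0.5 / np.linalg.norm(A, 2) ** 2              # conservative step size
for _ in range(5000):
    r = A @ (u - v) - b
    grad_u = A.T @ r + lam                          # d/du of 0.5||A(u-v)-b||^2 + lam*sum(u+v)
    grad_v = -A.T @ r + lam
    u = np.maximum(u - step * grad_u, 0.0)          # projection onto the bound constraints
    v = np.maximum(v - step * grad_v, 0.0)

x_hat = u - v
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```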

  1. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  2. Boosting-Based Optimization as a Generic Framework for Novelty and Fraud Detection in Complex Strategies

    NASA Astrophysics Data System (ADS)

    Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria

    2008-11-01

    Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare patterns detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, fraud detection and risk management in financial and insurance industry. Although many interdisciplinary approaches for rare event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. Even more challenging and much less formalized problem is novelty detection in complex strategies and models where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider trading or other illegal transactions mixed with valid operations and distributed over long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from the existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.

  3. Decentralized control of large flexible structures by joint decoupling

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Juang, Jer-Nan

    1994-01-01

    This paper presents a novel method to design decentralized controllers for large complex flexible structures by using the idea of joint decoupling. Decoupling of joint degrees of freedom from the interior degrees of freedom is achieved by setting the joint actuator commands to cancel the internal forces exerting on the joint degrees of freedom. By doing so, the interactions between substructures are eliminated. The global structure control design problem is then decomposed into several substructure control design problems. Control commands for interior actuators are set to be localized state feedback using decentralized observers for state estimation. The proposed decentralized controllers can operate successfully at the individual substructure level as well as at the global structure level. Not only control design but also control implementation is decentralized. A two-component mass-spring-damper system is used as an example to demonstrate the proposed method.

  4. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
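
    A serial sketch of the knn-graph construction that the paper distributes is shown below using scikit-learn; the random point set is an assumption and the message-passing layer is not shown.

```python
# Serial sketch of the knn-graph construction that the paper distributes across
# processors: each point is connected to its k closest points.  Data are random;
# the distributed message-passing layer is not shown.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
points = rng.normal(size=(1000, 16))          # high-dimensional point set
k = 5

nn = NearestNeighbors(n_neighbors=k + 1).fit(points)   # +1 because each point finds itself
distances, indices = nn.kneighbors(points)

knn_graph = {i: list(indices[i, 1:]) for i in range(len(points))}   # drop the self-neighbor
print("node 0 is connected to:", knn_graph[0])
```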

  5. An expert system executive for automated assembly of large space truss structures

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1993-01-01

    Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.

  6. Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.

    PubMed

    Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L

    2015-01-01

    Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are valuable for modeling the increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determine group size by researching and simulating primate society. Group size does matter for both primate society and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol in a large area when many factors are considered such as patrol efficiency, wireless interference, coverage, inter/intragroup communications, etc. This paper presents a simulation-based theoretical study on patrolling strategies for robot groups with the comparison of large and small groups through simulations and theoretical results.

  7. Optimal placement of multiple types of communicating sensors with availability and coverage redundancy constraints

    NASA Astrophysics Data System (ADS)

    Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.

    2010-04-01

    Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
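
    A toy greedy placement loop in the spirit of the cited approximate algorithm is sketched below: the site adding the most unmet coverage is chosen repeatedly until each target is covered by at least r sensors or the sensor budget is exhausted. The grid, detection radius, budget, and redundancy level are invented, and the communication constraints are omitted.

```python
# Toy greedy placement sketch in the spirit of the approximate algorithm: repeatedly
# place the candidate sensor that adds the most detection coverage until every target
# is covered by at least r sensors or the sensor budget is exhausted.
# The grid, detection radius, budget, and redundancy level are invented.
import numpy as np

rng = np.random.default_rng(6)
targets = rng.uniform(0, 10, size=(60, 2))                     # points that must be covered
candidates = [(x, y) for x in range(11) for y in range(11)]    # candidate sensor sites
radius, budget, r = 2.0, 12, 2                                 # range, sensors available, redundancy

def covered_by(site):
    d = np.linalg.norm(targets - np.asarray(site), axis=1)
    return set(np.flatnonzero(d <= radius))

coverage_count = np.zeros(len(targets), dtype=int)
chosen = []
while len(chosen) < budget and (coverage_count < r).any():
    def gain(site):
        return sum(1 for t in covered_by(site) if coverage_count[t] < r)
    best = max((c for c in candidates if c not in chosen), key=gain)
    chosen.append(best)
    for t in covered_by(best):
        coverage_count[t] += 1

print("placed %d sensors; targets meeting redundancy: %d/%d"
      % (len(chosen), int((coverage_count >= r).sum()), len(targets)))
```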

  8. Mitochondrial disease associated with complex I (NADH-CoQ oxidoreductase) deficiency.

    PubMed

    Scheffler, Immo E

    2015-05-01

    Mitochondrial diseases due to a reduced capacity for oxidative phosphorylation were first identified more than 20 years ago, and their incidence is now recognized to be quite significant. In a large proportion of cases the problem can be traced to a complex I (NADH-CoQ oxidoreductase) deficiency (Phenotype MIM #252010). Because the complex consists of 44 subunits, there are many potential targets for pathogenic mutations, both on the nuclear and mitochondrial genomes. Surprisingly, however, almost half of the complex I deficiencies are due to defects in as yet unidentified genes that encode proteins other than the structural proteins of the complex. This review attempts to summarize what we know about the molecular basis of complex I deficiencies: mutations in the known structural genes, and mutations in an increasing number of genes encoding "assembly factors", that is, proteins required for the biogenesis of a functional complex I that are not found in the final complex I. More such genes must be identified before definitive genetic counselling can be applied in all cases of affected families.

  9. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    The blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure that spans a large range of structural scales, from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is just making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern day 3D imagers, it is almost impossible to manually track the complex multiscale parameters from those large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automation of an adaptive nonsupervised system for tracking tubular objects based on a multiscale framework and the use of a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
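
    One widely used Hessian-based tubularity (vesselness) response is available in scikit-image as the Frangi filter; the two-dimensional toy below only illustrates the multiscale Hessian idea and is not the ITK-based pipeline proposed in the paper.

```python
# One common Hessian-based tubularity (vesselness) response, here via scikit-image's
# Frangi filter on a synthetic 2-D image; the paper's pipeline uses ITK and 3-D data,
# so this is only an illustration of the multiscale Hessian idea.
import numpy as np
from skimage.filters import frangi

image = np.zeros((128, 128))
image[60:64, 10:120] = 1.0                      # a bright tube-like structure
image += 0.05 * np.random.default_rng(7).normal(size=image.shape)

# detect bright ridges over a small range of scales
response = frangi(image, sigmas=range(1, 6), black_ridges=False)
print("response inside the tube: %.3f" % response[62, 64])
print("typical background response: %.3f" % response[20, 64])
```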

  10. Combining complex networks and data mining: Why and how

    NASA Astrophysics Data System (ADS)

    Zanin, M.; Papo, D.; Sousa, P. A.; Menasalvas, E.; Nicchi, A.; Kubik, E.; Boccaletti, S.

    2016-05-01

    The increasing power of computer technology does not dispense with the need to extract meaningful information out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extracting information from complex systems to ultimately create a new compact quantifiable representation, but they also often address similar problems too. In the face of that, a surprisingly low number of researchers turn out to resort to both methodologies. One may then be tempted to conclude that these two fields are either largely redundant or totally antithetic. The starting point of this review is that this state of affairs should be put down to contingent rather than conceptual differences, and that these two fields can in fact advantageously be used in a synergistic manner. An overview of both fields is first provided, some fundamental concepts of which are illustrated. A variety of contexts in which complex network theory and data mining have been used in a synergistic manner are then presented. Contexts in which the appropriate integration of complex network metrics can lead to improved classification rates with respect to classical data mining algorithms and, conversely, contexts in which data mining can be used to tackle important issues in complex network theory applications are illustrated. Finally, ways to achieve a tighter integration between complex networks and data mining, and open lines of research are discussed.

  11. A Three-Dimensional Finite-Element Model for Simulating Water Flow in Variably Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.

    1986-12-01

    A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.

  12. Data based identification and prediction of nonlinear and complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-01

    The problem of reconstructing nonlinear and complex dynamical systems from measured data or time series is central to many scientific disciplines including physical, biological, computer, and social sciences, as well as engineering and economics. The classic approach to phase-space reconstruction through the methodology of delay-coordinate embedding has been practiced for more than three decades, but the paradigm is effective mostly for low-dimensional dynamical systems. Often, the methodology yields only a topological correspondence of the original system. There are situations in various fields of science and engineering where the systems of interest are complex and high dimensional with many interacting components. A complex system typically exhibits a rich variety of collective dynamics, and it is of great interest to be able to detect, classify, understand, predict, and control the dynamics using data that are becoming increasingly accessible due to the advances of modern information technology. To accomplish these goals, especially prediction and control, an accurate reconstruction of the original system is required. Nonlinear and complex systems identification aims at inferring, from data, the mathematical equations that govern the dynamical evolution and the complex interaction patterns, or topology, among the various components of the system. With successful reconstruction of the system equations and the connecting topology, it may be possible to address challenging and significant problems such as identification of causal relations among the interacting components and detection of hidden nodes. The "inverse" problem thus presents a grand challenge, requiring new paradigms beyond the traditional delay-coordinate embedding methodology. The past fifteen years have witnessed rapid development of contemporary complex graph theory with broad applications in interdisciplinary science and engineering. The combination of graph, information, and nonlinear dynamical systems theories with tools from statistical physics, optimization, engineering control, applied mathematics, and scientific computing enables the development of a number of paradigms to address the problem of nonlinear and complex systems reconstruction. In this Review, we describe the recent advances in this forefront and rapidly evolving field, with a focus on compressive sensing based methods. In particular, compressive sensing is a paradigm developed in recent years in applied mathematics, electrical engineering, and nonlinear physics to reconstruct sparse signals using only limited data. It has broad applications ranging from image compression/reconstruction to the analysis of large-scale sensor networks, and it has become a powerful technique to obtain high-fidelity signals for applications where sufficient observations are not available. We will describe in detail how compressive sensing can be exploited to address a diverse array of problems in data based reconstruction of nonlinear and complex networked systems. The problems include identification of chaotic systems and prediction of catastrophic bifurcations, forecasting future attractors of time-varying nonlinear systems, reconstruction of complex networks with oscillatory and evolutionary game dynamics, detection of hidden nodes, identification of chaotic elements in neuronal networks, reconstruction of complex geospatial networks and nodal positioning, and reconstruction of complex spreading networks with binary data.. 
A number of alternative methods, such as those based on system response to external driving, synchronization, and noise-induced dynamical correlation, will also be discussed. Due to the high relevance of network reconstruction to biological sciences, a special section is devoted to a brief survey of the current methods to infer biological networks. Finally, a number of open problems including control and controllability of complex nonlinear dynamical networks are discussed. The methods outlined in this Review are principled on various concepts in complexity science and engineering such as phase transitions, bifurcations, stabilities, and robustness. The methodologies have the potential to significantly improve our ability to understand a variety of complex dynamical systems ranging from gene regulatory systems to social networks toward the ultimate goal of controlling such systems.
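
    A minimal instance of such data-based reconstruction via sparse regression is sketched below: a library of candidate terms is built from the time series and an L1-penalised fit selects the few terms that govern the dynamics. The system (a logistic map) and the candidate library are chosen purely for illustration.

```python
# Minimal instance of data-based reconstruction via sparse regression: build a library
# of candidate terms and let an L1-penalised fit select the few that govern the dynamics.
# The system (logistic map) and the candidate library are chosen for illustration only.
import numpy as np
from sklearn.linear_model import Lasso

x = np.empty(400)
x[0] = 0.2
for t in range(399):
    x[t + 1] = 3.7 * x[t] * (1.0 - x[t])          # true dynamics: 3.7*x - 3.7*x^2

library = np.column_stack([np.ones(399), x[:-1], x[:-1] ** 2, x[:-1] ** 3, np.sin(x[:-1])])
target = x[1:]

model = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(library, target)
for name, coef in zip(["1", "x", "x^2", "x^3", "sin(x)"], model.coef_):
    if abs(coef) > 0.05:
        print("%-7s %+.3f" % (name, coef))        # the x and x^2 terms should dominate
```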

  13. A fictitious domain method for fluid/solid coupling applied to the lithosphere/asthenosphere interaction.

    NASA Astrophysics Data System (ADS)

    Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel

    2014-05-01

    A large variety of geodynamical problems can be viewed as solid/fluid interaction problems coupling two bodies with different physics. In particular, the lithosphere/asthenosphere mechanical interaction in subduction zones belongs to this kind of problem, where the solid lithosphere is embedded in the asthenospheric viscous fluid. In many fields (industry, civil engineering, etc.), in which deformations of the solid and the fluid are "small", numerical modelers consider the exact discretization of both domains and fit the shape of the interface between the two domains as well as possible, solving the discretized physical problems by the Finite Element Method (FEM). However, in the context of subduction, the lithosphere undergoes large deformation and can evolve into a complex geometry, leading to significant deformation of the surrounding asthenosphere. To avoid the precise meshing of complex geometries, numerical modelers have developed non-matching interface methods called Fictitious Domain Methods (FDM). The main idea of these methods is to extend the initial problem to a bigger (and simpler) domain. In our version of FDM, we determine the forces at the immersed solid boundary required to minimize (in the least-squares sense) the difference between fluid and solid velocities at this interface. This method is first-order accurate and its stability depends on the ratio between the fluid background mesh size and the interface discretization. We present the formulation and provide benchmarks and examples showing the potential of the method: 1) a comparison with an analytical solution of a viscous flow around a rigid body; 2) an experiment of a rigid sphere sinking in a viscous fluid (in two- and three-dimensional cases); 3) a comparison with an analog subduction experiment. A companion presentation describes the geodynamical application of this method to Andean subduction dynamics, studying cyclic slab folding on the 660 km discontinuity and its relationship with flat subduction.

  14. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    PubMed

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Data from different agencies often include records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of the available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
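
    A toy version of the pipeline described (blocking, pairwise distances, complete-linkage clustering) is sketched below with SciPy; the records, blocking key, string-similarity distance, and threshold are invented, and the parallel implementation is not shown.

```python
# Toy sketch of the record-linkage pipeline described: block records on a cheap key,
# then cluster each block with complete linkage on a pairwise distance, so a cluster
# only forms when every pair of records inside it is similar.  Data are invented.
from difflib import SequenceMatcher
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

records = ["john smith 1970", "jon smith 1970", "jane doe 1985",
           "john smyth 1970", "jane d. 1985"]
blocks = {}
for rec in records:                            # blocking key: first letter + year
    blocks.setdefault((rec[0], rec.split()[-1]), []).append(rec)

for key, block in blocks.items():
    if len(block) == 1:
        print(key, "->", block)
        continue
    n = len(block)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim = SequenceMatcher(None, block[i], block[j]).ratio()
            dist[i, j] = dist[j, i] = 1.0 - sim
    Z = linkage(squareform(dist), method="complete")    # complete-linkage clustering
    labels = fcluster(Z, t=0.35, criterion="distance")  # dissimilar pairs stay apart
    print(key, "->", list(zip(block, labels)))
```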

  15. Efficient Record Linkage Algorithms Using Complete Linkage Clustering

    PubMed Central

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Data from different agencies often include records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of the available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604

  16. The design of multiplayer online video game systems

    NASA Astrophysics Data System (ADS)

    Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    A distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review, analyze and evaluate current MOG system architectures. Furthermore, we propose a clustered-server architecture together with a region-oriented allocation strategy to provide a scalable solution. Two key issues, i.e. interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
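
    As a rough, hypothetical illustration of what a region-oriented allocation strategy can look like, the sketch below assigns world regions to the least-loaded server of a cluster so that players in the same region share a server. The region names, loads and server names are invented, and this is not the allocation strategy proposed in the paper.

      # Hypothetical sketch of a region-oriented allocation: each world region is
      # assigned to the currently least-loaded server, so players in the same region
      # share a server and most synchronization traffic stays local to that server.
      import heapq

      def allocate_regions(regions, servers):
          """Map each region (with an expected player load) to a server name."""
          heap = [(0, s) for s in servers]        # (current load, server), min-heap
          heapq.heapify(heap)
          assignment = {}
          for region, load in sorted(regions.items(), key=lambda kv: -kv[1]):
              current, server = heapq.heappop(heap)
              assignment[region] = server
              heapq.heappush(heap, (current + load, server))
          return assignment

      regions = {"forest": 120, "city": 300, "dungeon": 80, "harbor": 150}
      print(allocate_regions(regions, ["srv-a", "srv-b"]))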

  17. Algorithm For Optimal Control Of Large Structures

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Garba, John A.; Utku, Senol

    1989-01-01

    Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.
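
    The central numerical step named above, iterating a Volterra integral equation of the second kind, can be sketched generically as follows. This is a scalar Picard-iteration example with trapezoidal quadrature, not the article's matrix formulation or its cost functional.

      # Generic sketch: solve a scalar Volterra equation of the second kind,
      #     u(t) = f(t) + integral_0^t K(t, s) u(s) ds,
      # by successive substitution (Picard iteration) with trapezoidal quadrature.
      import numpy as np

      def solve_volterra(f, K, t_end=1.0, n=200, iters=25):
          t = np.linspace(0.0, t_end, n + 1)
          h = t[1] - t[0]
          fvals = np.array([f(ti) for ti in t])
          u = fvals.copy()                          # initial guess u_0 = f
          for _ in range(iters):
              u_new = fvals.copy()
              for i in range(1, n + 1):
                  w = np.full(i + 1, h)             # trapezoidal weights on [0, t_i]
                  w[0] = w[-1] = h / 2.0
                  kernel = np.array([K(t[i], t[j]) for j in range(i + 1)])
                  u_new[i] += np.dot(w * kernel, u[:i + 1])
              u = u_new
          return t, u

      # Known closed form: u(t) = exp(t) solves u(t) = 1 + integral_0^t u(s) ds.
      t, u = solve_volterra(lambda ti: 1.0, lambda ti, s: 1.0)
      print(abs(u[-1] - np.e))                      # only discretization error remains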

  18. Group Decision Support System to Aid the Process of Design and Maintenance of Large Scale Systems

    DTIC Science & Technology

    1992-03-23

    from a fuzzy set of user requirements. The overall objective of the project is to develop a system combining the characteristics of a compact computer... AHP ) for hierarchical prioritization. 4) Individual Evaluation and Selection of Alternatives - Allows the decision maker to individually evaluate...its concept of outranking relations. The AHP method supports complex decision problems by successively decomposing and synthesizing various elements

  19. JPRS Report, Soviet Union, Economic Affairs

    DTIC Science & Technology

    1988-10-18

    ["Commodities—The Mirror of Cost Accounting"] [Text] A number of large-scale decisions directed toward increasing the production of high-quality...suitable in the sphere of scientific research and experimental design work. It is known, for example, that the number of blueprints, specifications, or...the situation, Yu. Kozyrev, deputy chief of the Department for Problems of the Machine Building Complex of the USSR State Committee for Science and

  20. An implementation of the distributed programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1981-01-01

    A method is described for implementing a flexible software system that combines large, complex programs with small, user-supplied, problem-dependent programs and that distributes their execution between a mainframe and a minicomputer. The Programming Structural Synthesis System (PROSSS) was the specific software system considered. The results of such distributed implementation are flexibility of the optimization procedure organization and versatility of the formulation of constraints and design variables.

  1. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  2. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays

    PubMed Central

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-01

    The zero-potential scanning circuit is widely used as a read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error which originates from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array; therefore, it severely increases the power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver which uses only one operational amplifier to drive all rows of a sensor array with high accuracy. The measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of the previous counterpart. PMID:26821029

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Homer; Ashok Varikuti; Xinming Ou

    Various tools exist to analyze enterprise network systems and to produce attack graphs detailing how attackers might penetrate into the system. These attack graphs, however, are often complex and difficult to comprehend fully, and a human user may find it problematic to reach appropriate configuration decisions. This paper presents methodologies that can 1) automatically identify portions of an attack graph that do not help a user to understand the core security problems and so can be trimmed, and 2) automatically group similar attack steps as virtual nodes in a model of the network topology, to immediately increase the understandability of the data. We believe both methods are important steps toward improving visualization of attack graphs to make them more useful in configuration management for large enterprise networks. We implemented our methods using one of the existing attack-graph toolkits. Initial experimentation shows that the proposed approaches can 1) significantly reduce the complexity of attack graphs by trimming a large portion of the graph that is not needed for a user to understand the security problem, and 2) significantly increase the accessibility and understandability of the data presented in the attack graph by clearly showing, within a generated visualization of the network topology, the number and type of potential attacks to which each host is exposed.
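
    One plausible way to implement the first step, trimming the parts of an attack graph that do not contribute to understanding how a critical asset can be reached, is sketched below with networkx. The graph, the entry node and the goal node are invented, and this illustrates only the trimming idea, not the authors' toolkit.

      # Minimal sketch (not the authors' tool): keep only the attack-graph nodes that
      # lie on at least one path from the attacker's entry node to a critical asset,
      # since the remaining nodes do not help explain how that asset can be reached.
      import networkx as nx

      def trim_attack_graph(g, entry, goal):
          reachable_from_entry = nx.descendants(g, entry) | {entry}
          can_reach_goal = nx.ancestors(g, goal) | {goal}
          return g.subgraph(reachable_from_entry & can_reach_goal).copy()

      g = nx.DiGraph([
          ("internet", "webserver"), ("webserver", "db"),       # relevant attack chain
          ("internet", "printer"), ("workstation", "printer"),  # dead-end steps
      ])
      print(sorted(trim_attack_graph(g, "internet", "db").nodes()))
      # ['db', 'internet', 'webserver']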

  4. Artificial intelligence applied to process signal analysis

    NASA Technical Reports Server (NTRS)

    Corsberg, Dan

    1988-01-01

    Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.

  5. Whither vaccines?

    PubMed

    Rodrigues, Charlene M C; Pinto, Marta V; Sadarangani, Manish; Plotkin, Stanley A

    2017-06-01

    Currently used vaccines have had major effects on eliminating common infections, largely by duplicating the immune responses induced by natural infections. Now vaccinology faces more complex problems, such as waning antibody, immunosenescence, evasion of immunity by the pathogen, deviation of immunity by the microbiome, induction of inhibitory responses, and complexity of the antigens required for protection. Fortunately, vaccine development is now incorporating knowledge from immunology, structural biology, systems biology and synthetic chemistry to meet these challenges. In addition, international organisations are developing new funding and licensing pathways for vaccines aimed at pathogens with epidemic potential that emerge from tropical areas. © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.

  6. Towards practical multiscale approach for analysis of reinforced concrete structures

    NASA Astrophysics Data System (ADS)

    Moyeda, Arturo; Fish, Jacob

    2017-12-01

    We present a novel multiscale approach for analysis of reinforced concrete structural elements that overcomes two major hurdles in utilization of multiscale technologies in practice: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.

  7. Incremental electrohydraulic forming - A new approach for the manufacture of structured multifunctional sheet metal blanks

    NASA Astrophysics Data System (ADS)

    Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny

    2017-10-01

    Electrohydraulic Forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited as compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution for this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture the large multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastic that are employed in lightweight constructions or heating elements.

  8. Tailoring the Variational Implicit Solvent Method for New Challenges: Biomolecular Recognition and Assembly

    PubMed Central

    Ricci, Clarisse Gravina; Li, Bo; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2018-01-01

    Predicting solvation free energies and describing the complex water behavior that plays an important role in essentially all biological processes is a major challenge from the computational standpoint. While an atomistic, explicit description of the solvent can turn out to be too expensive in large biomolecular systems, most implicit solvent methods fail to capture “dewetting” effects and heterogeneous hydration by relying on a pre-established (i.e., guessed) solvation interface. Here we focus on the Variational Implicit Solvent Method, an implicit solvent method that adds water “plasticity” back to the picture by formulating the solvation free energy as a functional of all possible solvation interfaces. We survey VISM's applications to the problem of molecular recognition and report some of the most recent efforts to tailor VISM for more challenging scenarios, with the ultimate goal of including thermal fluctuations into the framework. The advances reported herein pave the way to make VISM a uniquely successful approach to characterize complex solvation properties in the recognition and binding of large-scale biomolecular complexes. PMID:29484300

  9. You Can't Get There From Here! Problems and Potential Solutions in Developing New Classes of Complex Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Truszkowski, Walter F.; Rouff, Christopher A.; Sterritt, Roy

    2005-01-01

    The explosion of capabilities and new products within the sphere of Information Technology (IT) has fostered widespread, overly optimistic opinions regarding the industry, based on common but unjustified assumptions of quality and correctness of software. These assumptions are encouraged by software producers and vendors, who at this late date have not succeeded in finding a way to overcome the lack of an automated, mathematically sound method for developing correct systems from requirements. NASA faces this dilemma as it envisages advanced mission concepts that involve large swarms of small spacecraft that will engage cooperatively to achieve science goals. Such missions entail levels of complexity that beg for new methods for system development far beyond today's methods, which are inadequate for ensuring correct behavior of large numbers of interacting intelligent mission elements. New system development techniques recently devised through NASA-led research will offer some innovative approaches to achieving correctness in complex system development, including autonomous swarm missions that exhibit emergent behavior, as well as general software products created by the computing industry.

  10. Physically-enhanced data visualisation: towards real time solution of Partial Differential Equations in 3D domains

    NASA Astrophysics Data System (ADS)

    Zlotnik, Sergio

    2017-04-01

    The information provided by visualisation environments can be greatly increased if the data shown are combined with relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information on the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a Partial Differential Equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g. Finite Elements, Finite Volumes, Finite Differences, etc.). However, all these traditional techniques require very large computational resources (particularly in 3D), making them useless in a real-time visualisation environment such as VR, because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in about one thousandth of a second, making the solver ideal to be embedded into VR environments. Based on Model Order Reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, which is run only once in a lifetime, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems and some simple coupled problems.
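
    The offline/online split that makes this real-time evaluation possible can be illustrated with a small proper orthogonal decomposition (POD) sketch on a toy parameterized family of problems. The 1D reaction-diffusion family, the number of snapshots and the number of retained modes are invented for the example; this is not the author's reduced-order formulation.

      # Offline/online model-order-reduction sketch (POD / reduced basis) on a toy
      # family of problems: -u'' + c*u = 1 on (0, 1), u(0) = u(1) = 0, parameterized
      # by the reaction coefficient c.  All sizes and parameter ranges are invented.
      import numpy as np

      n = 200
      h = 1.0 / (n + 1)
      A0 = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2        # discrete -u''
      M = np.eye(n)                                      # (lumped) mass matrix
      f = np.ones(n)

      # Offline phase (expensive, run once): snapshots for training parameters + SVD basis.
      train_c = np.linspace(0.0, 100.0, 20)
      snapshots = np.column_stack([np.linalg.solve(A0 + c * M, f) for c in train_c])
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      V = U[:, :5]                                       # keep 5 POD modes

      # Online phase (cheap, real time): a 5x5 solve per parameter query.
      def reduced_solve(c):
          Ar = V.T @ (A0 + c * M) @ V                    # 5x5 reduced operator
          return V @ np.linalg.solve(Ar, V.T @ f)        # lift back to the full grid

      c_test = 37.5
      err = np.linalg.norm(reduced_solve(c_test) - np.linalg.solve(A0 + c_test * M, f))
      print(f"reduced-vs-full error: {err:.2e}")         # small compared with the solution norm

    In a genuine implementation the affine parameter dependence would be exploited so that the reduced matrices V.T @ A0 @ V and V.T @ M @ V are assembled once in the offline phase, making the online cost independent of the full grid size.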

  11. What Explains Patterns of Diversification and Richness among Animal Phyla?

    PubMed Central

    Jezkova, Tereza; Wiens, John J.

    2016-01-01

    Animal phyla vary dramatically in species richness (from 1 species to >1.2 million), but the causes of this variation remain largely unknown. Animals have also evolved striking variation in morphology and ecology, including sessile marine taxa lacking heads, eyes, limbs, and complex organs (e.g. sponges), parasitic worms (e.g. nematodes, platyhelminths), and taxa with eyes, skeletons, limbs, and complex organs that dominate terrestrial ecosystems (arthropods, chordates). Relating this remarkable variation in traits to the diversification and richness of animal phyla is a fundamental yet unresolved problem in biology. Here, we test the impacts of 18 traits (including morphology, ecology, reproduction, and development) on diversification and richness of extant animal phyla. Using phylogenetic multiple regression, the best-fitting model includes five traits that explain ~74% of the variation in diversification rates (dioecy, parasitism, eyes/photoreceptors, a skeleton, non-marine habitat). However, a model including just three (skeleton, parasitism, habitat) explains nearly as much variation (~67%). Diversification rates then largely explain richness patterns. Our results also identify many striking traits that have surprisingly little impact on diversification (e.g. head, limbs, and complex circulatory and digestive systems). Overall, our results reveal the key factors that shape large-scale patterns of diversification and richness across >80% of all extant, described species. PMID:28221832

  12. What Explains Patterns of Diversification and Richness among Animal Phyla?

    PubMed

    Jezkova, Tereza; Wiens, John J

    2017-03-01

    Animal phyla vary dramatically in species richness (from one species to >1.2 million), but the causes of this variation remain largely unknown. Animals have also evolved striking variation in morphology and ecology, including sessile marine taxa lacking heads, eyes, limbs, and complex organs (e.g., sponges), parasitic worms (e.g., nematodes, platyhelminths), and taxa with eyes, skeletons, limbs, and complex organs that dominate terrestrial ecosystems (arthropods, chordates). Relating this remarkable variation in traits to the diversification and richness of animal phyla is a fundamental yet unresolved problem in biology. Here, we test the impacts of 18 traits (including morphology, ecology, reproduction, and development) on diversification and richness of extant animal phyla. Using phylogenetic multiple regression, the best-fitting model includes five traits that explain ∼74% of the variation in diversification rates (dioecy, parasitism, eyes/photoreceptors, a skeleton, nonmarine habitat). However, a model including just three (skeleton, parasitism, habitat) explains nearly as much variation (∼67%). Diversification rates then largely explain richness patterns. Our results also identify many striking traits that have surprisingly little impact on diversification (e.g., head, limbs, and complex circulatory and digestive systems). Overall, our results reveal the key factors that shape large-scale patterns of diversification and richness across >80% of all extant, described species.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (S{sub n}) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the S{sub n} codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increases the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D S{sub n} solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A{sup 3}MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the S{sub n} adjoint function. A{sup 3}MCNP prepares the necessary input files for performing multigroup, 3-D adjoint S{sub n} calculations using TORT.

  14. A Case-Based, Problem-Based Learning Approach to Prepare Master of Public Health Candidates for the Complexities of Global Health

    PubMed Central

    Winskell, Kate; McFarland, Deborah A.; del Rio, Carlos

    2015-01-01

    Global health is a dynamic, emerging, and interdisciplinary field. To address current and emerging global health challenges, we need a public health workforce with adaptable and collaborative problem-solving skills. In the 2013–2014 academic year, the Hubert Department of Global Health at the Rollins School of Public Health–Emory University launched an innovative required core course for its first-year Master of Public Health students in the global health track. The course uses a case-based, problem-based learning approach to develop global health competencies. Small teams of students propose solutions to these problems by identifying learning issues and critically analyzing and synthesizing new information. We describe the course structure and logistics used to apply this approach in the context of a large class and share lessons learned. PMID:25706029

  15. Multi-trip vehicle routing and scheduling problem with time window in real life

    NASA Astrophysics Data System (ADS)

    Sze, San-Nah; Chiew, Kang-Leng; Sze, Jeeu-Fong

    2012-09-01

    This paper studies a manpower scheduling problem with multiple maintenance operations and vehicle routing considerations. Service teams located at a common service centre are required to travel to different customer sites. All customers must be served within given time windows, which are known in advance. The scheduling process must take into consideration complex constraints such as a meal break during the team's shift, multiple travelling trips, synchronisation of service teams and working shifts. The main objective of this study is to develop a heuristic that can generate high-quality solutions in a short time for large problem instances. A Two-stage Scheduling Heuristic is developed for different variants of the problem. Empirical results show that the proposed solution performs effectively and efficiently. In addition, our proposed approximation algorithm is very flexible and can be easily adapted to different scheduling environments and operational requirements.
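
    The time-window bookkeeping at the heart of any such heuristic can be sketched very simply, as below, where a single team greedily serves jobs in order of closing time, waiting when it arrives early and skipping jobs whose windows can no longer be met. The job data, travel time and shift start are invented, and this is not the paper's two-stage heuristic.

      # Minimal sketch (not the paper's heuristic): one team greedily serves jobs in
      # order of the end of their time windows, waiting when it arrives early and
      # skipping a job whose window can no longer be met.  Times are minutes from midnight.
      def schedule_team(jobs, shift_start=480, travel=15):
          """jobs: list of (name, window_open, window_close, service_minutes)."""
          time, route = shift_start, []
          for name, open_t, close_t, service in sorted(jobs, key=lambda j: j[2]):
              arrival = time + travel
              start = max(arrival, open_t)          # wait if the team arrives early
              if start > close_t:                   # window already missed: skip the job
                  continue
              route.append((name, start))
              time = start + service
          return route

      jobs = [("clinic", 540, 600, 45), ("school", 500, 560, 30), ("office", 700, 720, 20)]
      print(schedule_team(jobs))   # [('school', 500), ('clinic', 545), ('office', 700)]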

  16. Patch planting of hard spin-glass problems: Getting ready for the next generation of optimization approaches

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong; Mandrà, Salvatore; Katzgraber, Helmut

    We propose a patch planting heuristic that allows us to create arbitrarily-large Ising spin-glass instances on any topology and with any type of disorder, and where the exact ground-state energy of the problem is known by construction. By breaking up the problem into patches that can be treated either with exact or heuristic solvers, we can reconstruct the optimum of the original, considerably larger, problem. The scaling of the computational complexity of these instances with various patch numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and quantum annealing on the D-Wave 2X quantum annealer. The method can be useful for benchmarking of novel computing technologies and algorithms. NSF-DMR-1208046 and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No. FA8721-05-C-0002.

  17. Ground-water flow in low permeability environments

    USGS Publications Warehouse

    Neuzil, Christopher E.

    1986-01-01

    Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well developed secondary permeability have groundwater flow systems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of petroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively “tighter” media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed. Two important nonhydraulic flow phenomena, osmosis and ultrafiltration, are experimentally well established in prepared clays but have been incompletely investigated, particularly in undisturbed geologic media. Small-scale experimental results form much of the basis for analyses of flow in low-permeability environments which occurs on scales of time and size too large to permit direct observation. Such large-scale flow behavior is the focus of the second part of the review. Extrapolation of small-scale experimental experience becomes an important and sometimes controversial problem in this context. In large flow systems under steady state conditions the regional permeability can sometimes be determined, but systems with transient flow are more difficult to analyze. The complexity of the problem is enhanced by the sensitivity of large-scale flow to the effects of slow geologic processes. One-dimensional studies have begun to elucidate how simple burial or exhumation can generate transient flow conditions by changing the state of stress and temperature and by burial metamorphism. Investigation of the more complex problem of the interaction of geologic processes and flow in two and three dimensions is just beginning.
Because these transient flow analyses have largely been based on flow in experimental scale systems or in relatively permeable systems, deformation in response to effective stress changes is generally treated as linearly elastic; however, this treatment creates difficulties for the long periods of interest because viscoelastic deformation is probably significant. Also, large-scale flow simulations in argillaceous environments generally have neglected osmosis and ultrafiltration, in part because extrapolation of laboratory experience with coupled flow to large scales under in situ conditions is controversial. Nevertheless, the effects are potentially quite important because the coupled flow might cause ultra long lived transient conditions. The difficulties associated with analysis are matched by those of characterizing hydrologic conditions in tight environments; measurements of hydraulic head and sampling of pore fluids have been done only rarely because of the practical difficulties involved. These problems are also discussed in the second part of this paper.

  18. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.

  19. Computation of eigenpairs of Ax = lambda Bx for vibrations of spinning deformable bodies

    NASA Technical Reports Server (NTRS)

    Utku, S.; Clemente, J. L. M.

    1984-01-01

    It is shown that, when linear theory is used, the general eigenvalue problem related to the free vibrations of spinning deformable bodies is of the type Ax = lambda Bx, where A is Hermitian and B is real positive definite. Since the order n of the matrices may be large, and A and B are banded or block banded, for reasons of computational economy one is interested in obtaining only those eigenvalues which fall within the frequency band of interest of the problem. The paper extends the well-known method of bisections and iteration from the real n-dimensional space R^n to the complex n-dimensional space C^n, so that it can be applied to the present problem.
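
    The same requirement, computing only the eigenpairs of Ax = lambda Bx whose eigenvalues fall in a prescribed band, can be met for moderate problem sizes with SciPy's generalized Hermitian solver, as in the sketch below. The matrices and the band limits are synthetic, and this uses a modern library routine rather than the paper's bisection-and-iteration scheme.

      # Minimal sketch: extract only the eigenpairs of A x = lambda B x whose
      # eigenvalues fall in a band of interest (A Hermitian, B real positive definite),
      # using SciPy's generalized Hermitian solver rather than the paper's bisection scheme.
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(1)
      n = 50
      C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
      A = C + C.conj().T                       # Hermitian
      D = rng.standard_normal((n, n))
      B = D @ D.T + n * np.eye(n)              # real, symmetric positive definite

      lam, X = eigh(A, B, subset_by_value=(-0.2, 0.2))   # only the band (-0.2, 0.2]
      residual = np.linalg.norm(A @ X - B @ X * lam)     # check A x = lambda B x
      print(len(lam), f"eigenvalues in the band, residual {residual:.2e}")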

  20. Mesoscale modeling: solving complex flows in biology and biotechnology.

    PubMed

    Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander

    2013-07-01

    Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Genome-wide detection of intervals of genetic heterogeneity associated with complex traits

    PubMed Central

    Llinares-López, Felipe; Grimm, Dominik G.; Bodenham, Dean A.; Gieraths, Udo; Sugiyama, Mahito; Rowan, Beth; Borgwardt, Karsten

    2015-01-01

    Motivation: Genetic heterogeneity, the fact that several sequence variants give rise to the same phenotype, is a phenomenon that is of the utmost interest in the analysis of complex phenotypes. Current approaches for finding regions in the genome that exhibit genetic heterogeneity suffer from at least one of two shortcomings: (i) they require the definition of an exact interval in the genome that is to be tested for genetic heterogeneity, potentially missing intervals of high relevance, or (ii) they suffer from an enormous multiple hypothesis testing problem due to the large number of potential candidate intervals being tested, which results in either many false positives or a lack of power to detect true intervals. Results: Here, we present an approach that overcomes both problems: it allows one to automatically find all contiguous sequences of single nucleotide polymorphisms in the genome that are jointly associated with the phenotype. It also solves both the inherent computational efficiency problem and the statistical problem of multiple hypothesis testing, which are both caused by the huge number of candidate intervals. We demonstrate on Arabidopsis thaliana genome-wide association study data that our approach can discover regions that exhibit genetic heterogeneity and would be missed by single-locus mapping. Conclusions: Our novel approach can contribute to the genome-wide discovery of intervals that are involved in the genetic heterogeneity underlying complex phenotypes. Availability and implementation: The code can be obtained at: http://www.bsse.ethz.ch/mlcb/research/bioinformatics-and-computational-biology/sis.html. Contact: felipe.llinares@bsse.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072488
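
    A brute-force illustration of the underlying interval-association test is given below: an interval is scored as "hit" in an individual if any SNP in it carries a minor allele, and the hit indicator is tested against a binary phenotype with a chi-square test, with a Bonferroni correction over all enumerated intervals. The data are simulated and the enumeration is deliberately naive; the paper's contribution is performing this search efficiently with tighter multiple-testing control, which the sketch does not reproduce.

      # Naive, brute-force illustration of interval-based association testing on
      # simulated data: an interval is "hit" in an individual if any SNP in it carries
      # a minor allele, and the hit indicator is tested against a binary phenotype.
      import itertools
      import numpy as np
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(0)
      n_ind, n_snp = 200, 30
      genotypes = rng.integers(0, 2, size=(n_ind, n_snp))       # 0/1 minor-allele flags
      phenotype = rng.integers(0, 2, size=n_ind)

      results = []
      for start, end in itertools.combinations(range(n_snp + 1), 2):
          hit = genotypes[:, start:end].any(axis=1).astype(int)  # any variant in [start, end)
          table = np.array([[np.sum((hit == a) & (phenotype == b)) for b in (0, 1)]
                            for a in (0, 1)])
          if (table.sum(axis=1) > 0).all():                      # skip degenerate tables
              _, p, _, _ = chi2_contingency(table)
              results.append((p, (start, end)))

      best_p, best_iv = min(results)
      print(f"best interval {best_iv}: p = {best_p:.3g}, "
            f"Bonferroni-corrected p = {min(1.0, best_p * len(results)):.3g}")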

  2. Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.

    PubMed

    Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. In the literature, a large number of techniques of this kind can be found. Anyway, there are many recently proposed techniques, such as the artificial bee colony and imperialist competitive algorithm. This paper is focused on one recently published technique, the one called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now, it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems, which are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with the ones got by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  3. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    PubMed Central

    Osaba, E.; Diaz, F.; Carballedo, R.; Onieva, E.; Perallos, A.

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. In the literature, a large number of techniques of this kind can be found. Anyway, there are many recently proposed techniques, such as the artificial bee colony and imperialist competitive algorithm. This paper is focused on one recently published technique, the one called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now, it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems, which are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with the ones got by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results. PMID:25165742

  4. Community detection in complex networks by using membrane algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren

    Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve community detection in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of an object is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules, which employ the differential evolution algorithm to evolve objects, are responsible for improving the quality of objects. Communication rules implement the information exchange among membranes. Finally, the proposed algorithm is evaluated on synthetic networks, real-world networks with known partitions, and large-scale networks with unknown partitions. The experimental results indicate the superior performance of the proposed algorithm in comparison with the other algorithms tested.
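
    The central quality measure here, network modularity, and the idea of iteratively improving candidate partitions can be illustrated with the minimal single-population hill climb below, which uses networkx's modularity function on the karate-club graph. The membrane structure, communication rules and differential evolution of the actual algorithm are not reproduced; this only shows how candidate partitions are represented and scored.

      # Minimal sketch: candidate partitions ("objects") are scored by modularity and
      # improved by moving one node into a neighbouring community, keeping the change
      # when modularity rises.
      import random
      import networkx as nx
      from networkx.algorithms.community import modularity

      def detect_communities(g, steps=2000, seed=0):
          rng = random.Random(seed)
          labels = {v: i for i, v in enumerate(g)}          # start: every node alone
          def as_sets(lab):
              groups = {}
              for v, c in lab.items():
                  groups.setdefault(c, set()).add(v)
              return list(groups.values())
          best_q = modularity(g, as_sets(labels))
          for _ in range(steps):
              v = rng.choice(list(g))
              neighbours = list(g[v])
              if not neighbours:
                  continue
              trial = dict(labels)
              trial[v] = labels[rng.choice(neighbours)]     # adopt a neighbour's community
              q = modularity(g, as_sets(trial))
              if q > best_q:
                  labels, best_q = trial, q
          return as_sets(labels), best_q

      g = nx.karate_club_graph()
      parts, q = detect_communities(g)
      print(len(parts), "communities, modularity", round(q, 3))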

  5. Electrothermal atomisation atomic absorption conditions and matrix modifications for determining antimony, arsenic, bismuth, cadmium, gallium, gold, indium, lead, molybdenum, palladium, platinum, selenium, silver, tellurium, thallium and tin following back-extraction of organic aminohalide extracts

    USGS Publications Warehouse

    Clark, J.R.

    1986-01-01

    A multi-element organic-extraction and back-extraction procedure, that had been developed previously to eliminate matrix interferences in the determination of a large number of trace elements in complex materials such as geological samples, produced organic and aqueous solutions that were complex. Electrothermal atomisation atomic absorption conditions and matrix modifications have been developed for 13 of the extracted elements (Ag, As, Au, Bi, Cd, Ga, In, Pb, Sb, Se, Sn, Te and Tl) that enhance sensitivity, alleviate problems resulting from the complex solutions and produce acceptable precision. Platinum, Pd and Mo can be determined without matrix modification directly on the original unstripped extracts.

  6. Solving Complex Problems: A Convergent Approach to Cognitive Load Measurement

    ERIC Educational Resources Information Center

    Zheng, Robert; Cook, Anne

    2012-01-01

    The study challenged the current practices in cognitive load measurement involving complex problem solving by manipulating the presence of pictures in multiple rule-based problem-solving situations and examining the cognitive load resulting from both off-line and online measures associated with complex problem solving. Forty-eight participants…

  7. Temperament and problem solving in a population of adolescent guide dogs.

    PubMed

    Bray, Emily E; Sammel, Mary D; Seyfarth, Robert M; Serpell, James A; Cheney, Dorothy L

    2017-09-01

    It is often assumed that measures of temperament within individuals are more correlated to one another than to measures of problem solving. However, the exact relationship between temperament and problem-solving tasks remains unclear because large-scale studies have typically focused on each independently. To explore this relationship, we tested 119 prospective adolescent guide dogs on a battery of 11 temperament and problem-solving tasks. We then summarized the data using both confirmatory factor analysis and exploratory principal components analysis. Results of confirmatory analysis revealed that a priori separation of tests as measuring either temperament or problem solving led to weak results, poor model fit, some construct validity, and no predictive validity. In contrast, results of exploratory analysis were best summarized by principal components that mixed temperament and problem-solving traits. These components had both construct and predictive validity (i.e., association with success in the guide dog training program). We conclude that there is complex interplay between tasks of "temperament" and "problem solving" and that the study of both together will be more informative than approaches that consider either in isolation.

  8. From problem solving to problem definition: scrutinizing the complex nature of clinical practice.

    PubMed

    Cristancho, Sayra; Lingard, Lorelei; Regehr, Glenn

    2017-02-01

    In medical education, we have tended to present problems as being singular, stable, and solvable. Problem solving has, therefore, drawn much of medical education researchers' attention. This focus has been important but it is limited in terms of preparing clinicians to deal with the complexity of the 21st century healthcare system in which they will provide team-based care for patients with complex medical illness. In this paper, we use the Soft Systems Engineering principles to introduce the idea that in complex, team-based situations, problems usually involve divergent views and evolve with multiple solution iterations. As such we need to shift the conversation from (1) problem solving to problem definition, and (2) from a problem definition derived exclusively at the level of the individual to a definition derived at the level of the situation in which the problem is manifested. Embracing such a focus on problem definition will enable us to advocate for novel educational practices that will equip trainees to effectively manage the problems they will encounter in complex, team-based healthcare.

  9. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Korean MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  10. Examining the social ecology of a bar-crawl: An exploratory pilot study.

    PubMed

    Clapp, John D; Madden, Danielle R; Mooney, Douglas D; Dahlquist, Kristin E

    2017-01-01

    Many of the problems associated with alcohol occur after a single drinking event (e.g. drink driving, assault). These acute alcohol problems have a huge global impact and account for a large percentage of unintentional and intentional injuries in the world. Nonetheless, alcohol research and preventive interventions rarely focus on drinking at the event-level since drinking events are complex, dynamic, and methodologically challenging to observe. This exploratory study provides an example of how event-level data may be collected, analyzed, and interpreted. The drinking behavior of twenty undergraduate students enrolled at a large Midwestern public university was observed during a single bar crawl event that is organized by students annually. Alcohol use was monitored with transdermal alcohol devices coupled with ecological momentary assessments and geospatial data. "Small N, Big Data" studies have the potential to advance health behavior theory and to guide real-time interventions. However, such studies generate large amounts of within subject data that can be challenging to analyze and present. This study examined how to visually display event-level data and also explored the relationship between some basic indicators and alcohol consumption.

  11. Prevention and treatment of long-term social disability amongst young people with emerging severe mental illness with social recovery therapy (The PRODIGY Trial): study protocol for a randomised controlled trial.

    PubMed

    Fowler, David; French, Paul; Banerjee, Robin; Barton, Garry; Berry, Clio; Byrne, Rory; Clarke, Timothy; Fraser, Rick; Gee, Brioney; Greenwood, Kathryn; Notley, Caitlin; Parker, Sophie; Shepstone, Lee; Wilson, Jon; Yung, Alison R; Hodgekins, Joanne

    2017-07-11

    Young people who have social disability associated with severe and complex mental health problems are an important group in need of early intervention. Their problems often date back to childhood and become chronic at an early age. Without intervention, the long-term prognosis is often poor and the economic costs very large. There is a major gap in the provision of evidence-based interventions for this group, and therefore new approaches to detection and intervention are needed. This trial provides a definitive evaluation of a new approach to early intervention with young people with social disability and severe and complex mental health problems using social recovery therapy (SRT) over a period of 9 months to improve mental health and social recovery outcomes. This is a pragmatic, multi-centre, single blind, superiority randomised controlled trial. It is conducted in three sites in the UK: Sussex, Manchester and East Anglia. Participants are aged 16 to 25 and have both persistent and severe social disability (defined as engaged in less than 30 hours per week of structured activity) and severe and complex mental health problems. The target sample size is 270 participants, providing 135 participants in each trial arm. Participants are randomised 1:1 using a web-based randomisation system and allocated to either SRT plus optimised treatment as usual (enhanced standard care) or enhanced standard care alone. The primary outcome is time use, namely hours spent in structured activity per week at 15 months post-randomisation. Secondary outcomes assess typical mental health problems of the group, including subthreshold psychotic symptoms, negative symptoms, depression and anxiety. Time use, secondary outcomes and health economic measures are assessed at 9, 15 and 24 months post-randomisation. This definitive trial will be the first to evaluate a novel psychological treatment for social disability and mental health problems in young people presenting with social disability and severe and complex non-psychotic mental health problems. The results will have important implications for policy and practice in the detection and early intervention for this group in mental health services. Trial Registry: International Standard Randomised Controlled Trial Number (ISRCTN) Registry. ISRCTN47998710 (registered 29/11/2012).

  12. A distributed finite-element modeling and control approach for large flexible structures

    NASA Technical Reports Server (NTRS)

    Young, K. D.

    1989-01-01

    An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice which begins with a model of the open loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well developed Component Mode Synthesis (CMS) methods which were demonstrated to be effective for solving large complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.

  13. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  14. Igniting the Light Elements: The Los Alamos Thermonuclear Weapon Project, 1942-1952

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitzpatrick, Anne C.

    1999-07-01

    The American system of nuclear weapons research and development was conceived and developed not as a result of technological determinism, but by a number of individual architects who promoted the growth of this large technologically-based complex. While some of the technological artifacts of this system, such as the fission weapons used in World War II, have been the subject of many historical studies, their technical successors--fusion (or hydrogen) devices--are representative of the largely unstudied highly secret realms of nuclear weapons science and engineering. In the postwar period a small number of Los Alamos Scientific Laboratory's staff and affiliates were responsible for theoretical work on fusion weapons, yet the program was subject to both the provisions and constraints of the US Atomic Energy Commission, of which Los Alamos was a part. The Commission leadership's struggle to establish a mission for its network of laboratories, least of all to keep them operating, affected Los Alamos's leaders' decisions as to the course of weapons design and development projects. Adapting Thomas P. Hughes's ''large technological systems'' thesis, I focus on the technical, social, political, and human problems that nuclear weapons scientists faced while pursuing the thermonuclear project, demonstrating why the early American thermonuclear bomb project was an immensely complicated scientific and technological undertaking. I concentrate mainly on Los Alamos Scientific Laboratory's Theoretical, or T, Division, and its members' attempts to complete an accurate mathematical treatment of the ''Super''--the most difficult problem in physics in the postwar period--and other fusion weapon theories. Although tackling a theoretical problem, theoreticians had to address technical and engineering issues as well. I demonstrate the relative value and importance of H-bomb research over time in the postwar era to scientific, political, and military participants in this project. I analyze how and when participants in the H-bomb project recognized both blatant and subtle problems facing the project, how scientists solved them, and the relationship this process had to official nuclear weapons policies. Consequently, I show how the practice of nuclear weapons science in the postwar period became an extremely complex, technologically-based endeavor.

  15. Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets

    NASA Astrophysics Data System (ADS)

    Levit, C.; Gazis, P. R.

    2006-05-01

    Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our abilities to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample) the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built into all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.

  16. Reliability Standards of Complex Engineering Systems

    NASA Astrophysics Data System (ADS)

    Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.

    2017-11-01

    Production and manufacturing play an important role in modern society. Industrial production is nowadays characterized by increasingly complex communications between its parts, so the problem of preventing accidents in a large industrial enterprise becomes especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance. Potential damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, can even cause a loss of human lives. That is why the reliability of industrial enterprise functioning is immensely important. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions, and the main characteristic here is the stability of its operation. This paper develops a reliability indicator combining set theory methodology and a state space method, both of which are widely used to analyze dynamically developing probability processes. The research also introduces a set of reliability indicators for complex technical systems.

  17. Combining local search with co-evolution in a remarkably simple way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.

    2000-05-01

    The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
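    As a concrete illustration of the procedure described above, the sketch below applies the one-parameter (tau) variant of extremal optimization to a toy MAX-CUT instance: each node's local fitness is the fraction of its edges that are cut, the k-th worst node is selected with probability proportional to k^(-tau) and flipped unconditionally, and the best cut seen is retained. The problem choice, function names and parameter values are illustrative assumptions, not taken from the record.

```python
import random

def tau_eo_maxcut(edges, n, tau=1.4, steps=20000, seed=0):
    """Illustrative tau-EO sketch for MAX-CUT (not the paper's code).

    edges: list of (i, j) node pairs; n: number of nodes.
    """
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    spin = [rng.choice((0, 1)) for _ in range(n)]

    def cut_size(s):
        return sum(1 for i, j in edges if s[i] != s[j])

    # rank-selection weights: probability ~ rank**(-tau), worst-fit rank first
    weights = [(k + 1) ** (-tau) for k in range(n)]
    best, best_spin = cut_size(spin), spin[:]
    for _ in range(steps):
        # local fitness of each node: fraction of its incident edges that are cut
        fit = [(sum(spin[i] != spin[j] for j in adj[i]) / max(1, len(adj[i])), i)
               for i in range(n)]
        fit.sort()                                # worst-fit nodes come first
        k = rng.choices(range(n), weights=weights)[0]
        node = fit[k][1]
        spin[node] ^= 1                           # replace the weak element unconditionally
        c = cut_size(spin)
        if c > best:                              # keep the best configuration seen so far
            best, best_spin = c, spin[:]
    return best, best_spin
```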

  18. Implicit Geometry Meshing for the simulation of Rotary Friction Welding

    NASA Astrophysics Data System (ADS)

    Schmicker, D.; Persson, P.-O.; Strackeljan, J.

    2014-08-01

    The simulation of Rotary Friction Welding (RFW) is a challenging task, since it constitutes a coupled problem involving phenomena such as large plastic deformations, heat flux, contact and friction. In particular, mesh generation and its restoration when using a Lagrangian description of motion are especially demanding. In this regard, Implicit Geometry Meshing (IGM) algorithms are promising alternatives to the more conventional explicit methods. Because of the implicit description of the geometry during remeshing, the IGM procedure turns out to be highly robust and generates spatial discretizations of high quality regardless of the complexity of the flash shape and its inclusions. A model for efficient RFW simulation is presented, which is based on a Carreau fluid law, an Augmented Lagrange approach for mapping the incompressible deformations, a penalty contact approach, a fully regularized Coulomb/fluid friction law and a hybrid time integration strategy. The implementation of the IGM algorithm using 6-node triangular finite elements is described in detail. The techniques are demonstrated on a fairly complex friction welding problem, demonstrating the performance and the potential of the proposed method. The techniques are general and straightforward to implement, and offer the potential of successful adoption to a wide range of other engineering problems.
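    For reference, the Carreau fluid law mentioned above is usually written in the standard form below; the material parameters (zero-shear and infinite-shear viscosities, relaxation time and power-law index) used by the authors are not given in the record.

```latex
% Standard Carreau law: effective viscosity \eta as a function of shear rate \dot{\gamma}.
% \eta_0 (zero-shear viscosity), \eta_\infty (infinite-shear viscosity), \lambda (relaxation
% time) and n (power-law index) are material parameters not specified in this record.
\eta(\dot{\gamma}) = \eta_\infty + (\eta_0 - \eta_\infty)\left[1 + (\lambda\dot{\gamma})^2\right]^{\frac{n-1}{2}}
```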

  19. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
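    The route traversal-time estimate described above (mode of the historical distribution via kernel density estimation) can be sketched as follows; the function name, units and bandwidth rule are assumptions for illustration, not the paper's code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def traversal_time_mode(times_minutes):
    """Illustrative sketch: modal traversal time of one origin-destination route,
    estimated from historical flight times with a Gaussian KDE (not the paper's code)."""
    samples = np.asarray(times_minutes, dtype=float)
    kde = gaussian_kde(samples)                 # bandwidth from Scott's rule by default
    grid = np.linspace(samples.min(), samples.max(), 512)
    return float(grid[np.argmax(kde(grid))])    # location of the density peak

# e.g. traversal_time_mode([128, 131, 127, 140, 129, 130, 126]) -> roughly 128-130
```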

  20. A novel CFS-PML boundary condition for transient electromagnetic simulation using a fictitious wave domain method

    NASA Astrophysics Data System (ADS)

    Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou

    2017-01-01

    In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle for the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex frequency shifted perfectly matched layer (PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can result in spurious reflections and large errors in the FWD solutions, especially for low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained with our new FD-FWD approach and with the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in the FWD for TEM diffusion problems works well even in complex high-contrast conductivity models.
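    For orientation only: complex frequency shifted PMLs are commonly defined through a coordinate-stretching factor of the general form below. This is the textbook CFS form, not the authors' FWD-specific parameterization, which is not reproduced in the record.

```latex
% General form of the complex frequency shifted (CFS) PML coordinate-stretching factor;
% \kappa_x \ge 1, \sigma_x and \alpha_x are the absorbing-layer profiles and \varepsilon_0
% the vacuum permittivity. The authors' FWD-specific parameter choices are not in the record.
s_x(\omega) = \kappa_x + \frac{\sigma_x}{\alpha_x + i\,\omega\,\varepsilon_0}
```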

  1. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  2. Using VCL as an Aspect-Oriented Approach to Requirements Modelling

    NASA Astrophysics Data System (ADS)

    Amálio, Nuno; Kelsen, Pierre; Ma, Qin; Glodt, Christian

    Software systems are becoming larger and more complex. By tackling the modularisation of crosscutting concerns, aspect orientation draws attention to modularity as a means to address the problems of scalability, complexity and evolution in software systems development. Aspect-oriented modelling (AOM) applies aspect-orientation to the construction of models. Most existing AOM approaches are designed without a formal semantics, and use multi-view partial descriptions of behaviour. This paper presents an AOM approach based on the Visual Contract Language (VCL): a visual language for abstract and precise modelling, designed with a formal semantics, and comprising a novel approach to visual behavioural modelling based on design by contract where behavioural descriptions are total. By applying VCL to a large case study of a car-crash crisis management system, the paper demonstrates how modularity of VCL's constructs, at different levels of granularity, help to tackle complexity. In particular, it shows how VCL's package construct and its associated composition mechanisms are key in supporting separation of concerns, coarse-grained problem decomposition and aspect-orientation. The case study's modelling solution has a clear and well-defined modular structure; the backbone of this structure is a collection of packages encapsulating local solutions to concerns.

  3. Reproducing the scaling laws for Slow and Fast ruptures

    NASA Astrophysics Data System (ADS)

    Romanet, Pierre; Bhat, Harsha; Madariaga, Raúl

    2017-04-01

    Modelling the long term behaviour of large, natural fault systems that are geometrically complex is a challenging problem, which is why most research so far has concentrated on modelling the long term response of a single planar fault system. To overcome this limitation, we appeal to a novel algorithm called the Fast Multipole Method, which was developed in the context of modelling gravitational N-body problems. This method allows us to decrease the computational complexity of the calculation from O(N^2) to O(N log N), N being the number of discretised elements on the fault. We then adapted this method to model the long term quasi-dynamic response of two faults, with step-over like geometry, that are governed by rate and state friction laws. We assume the faults have spatially uniform rate-weakening friction. The results show that when stress interaction between faults is accounted for, a complex spectrum of slip (including slow-slip events, dynamic ruptures and partial ruptures) emerges naturally. The simulated slow-slip and dynamic events follow the scaling laws inferred by Ide et al. (2007), i.e. M ∝ T for slow-slip events and M ∝ T^2 (in 2D) for dynamic events.

  4. Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1999-01-01

    The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.

  5. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
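    For context, a greedy baseline of the kind the record compares against (G(x)) can be sketched as below: items are taken in value-to-weight order (or in shuffled order) and placed in the first knapsack that still fits them. This is an illustrative baseline only, not the paper's agent-market algorithm M(x); names and defaults are assumptions.

```python
import random

def greedy_multi_knapsack(items, capacities, shuffle=False, seed=0):
    """Illustrative greedy baseline for the 0-1 multiple knapsack problem
    (a G(x)-style baseline, not the paper's M(x) market algorithm).

    items: list of (value, weight) with positive weights; capacities: one per knapsack.
    Returns (total value packed, {item index: knapsack index}).
    """
    order = list(range(len(items)))
    if shuffle:
        random.Random(seed).shuffle(order)       # random-shuffle variant
    else:
        order.sort(key=lambda i: items[i][0] / items[i][1], reverse=True)
    remaining = list(capacities)
    assignment = {}
    for i in order:
        value, weight = items[i]
        for k, cap in enumerate(remaining):
            if weight <= cap:                    # first knapsack that still fits the item
                assignment[i] = k
                remaining[k] -= weight
                break
    total = sum(items[i][0] for i in assignment)
    return total, assignment
```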

  6. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. This transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, the way to provide a set of capabilities is through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to design an SoS, and more specifically, approaches to design a new system as a member of an SoS, will likely differ from the traditional design approaches for complex, monolithic (meaning the constituent parts have no ability for independent operation) systems. Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which not only seek optimizing policies but also seek the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the concurrent design and control of new systems, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems; in contrast, results on nonlinear dynamic optimization problems are rare. The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is to be addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.

  7. Multi-level systems modeling and optimization for novel aircraft

    NASA Astrophysics Data System (ADS)

    Subramanian, Shreyas Vathul

    This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise-optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission best achieved via a large collection of interacting simple systems, or relatively few highly capable, complex air vehicles). The vastly unexplored area of optimization in evolving design spaces will be studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors and the SoS architect.

  8. Final Report of the Project "From the finite element method to the virtual element method"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Gyrya, Vitaliy

    The Finite Element Method (FEM) is a powerful numerical tool that is used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type suitable for polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.

  9. Immunology for physicists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelson, A.S.; Weisbuch, G.

    1997-10-01

    The immune system is a complex system of cells and molecules that can provide us with a basic defense against pathogenic organisms. Like the nervous system, the immune system performs pattern recognition tasks, learns, and retains a memory of the antigens that it has fought. The immune system contains more than 10^7 different clones of cells that communicate via cell-cell contact and the secretion of molecules. Performing complex tasks such as learning and memory involves cooperation among large numbers of components of the immune system and hence there is interest in using methods and concepts from statistical physics. Furthermore, the immune response develops in time and the description of its time evolution is an interesting problem in dynamical systems. In this paper, the authors provide a brief introduction to the biology of the immune system and discuss a number of immunological problems in which the use of physical concepts and mathematical methods has increased our understanding. © 1997 The American Physical Society

  10. Etiology, Treatment and Prevention of Obesity in Childhood and Adolescence: A Decade in Review

    PubMed Central

    Spruijt-Metz, Donna

    2010-01-01

    Childhood obesity has become an epidemic on a worldwide scale. This article gives an overview of the progress made in childhood and adolescent obesity research in the last decade, with a particular emphasis on the transdisciplinary and complex nature of the problem. The following topics are addressed: 1) current definitions of childhood and adolescent overweight and obesity; 2) demography of childhood and adolescent obesity both in the US and globally; 3) current topics in the physiology of fat and obesity; 4) psychosocial correlates of childhood and adolescent overweight and obesity; 5) the three major obesity-related behaviors, i.e. dietary intake, physical activity and sleep; 6) genetic components of childhood and adolescent obesity; 7) environment and childhood and adolescent obesity; and 8) progress in interventions to prevent and treat childhood obesity. The article concludes with recommendations for future research, including the need for large-scale, high dose and long-term interventions that take into account the complex nature of the problem. PMID:21625328

  11. Applications of artificial intelligence to mission planning

    NASA Technical Reports Server (NTRS)

    Ford, Donnie R.; Rogers, John S.; Floyd, Stephen A.

    1990-01-01

    The scheduling problem facing NASA-Marshall mission planning is extremely difficult for several reasons. The most critical factor is the computational complexity involved in developing a schedule. The size of the search space is large along some dimensions and infinite along others. Because of this and other difficulties, many conventional operations research techniques are infeasible or inadequate for solving these problems on their own. Therefore, the purpose is to examine various artificial intelligence (AI) techniques to assist or replace conventional techniques. The specific tasks performed were as follows: (1) to identify mission planning applications for object-oriented and rule-based programming; (2) to investigate interfacing AI-dedicated hardware (Lisp machines) to VAX hardware; (3) to demonstrate how Lisp may be called from within FORTRAN programs; (4) to investigate and report on programming techniques used in some commercial AI shells, such as Knowledge Engineering Environment (KEE); and (5) to study and report on algorithmic methods to reduce complexity as related to AI techniques.

  12. Case management: a case study.

    PubMed

    Stanton, M P; Walizer, E M; Graham, J I; Keppel, L

    2000-01-01

    This article describes the implementation of a pilot case management program at Walter Reed Army Medical Center. In it we discuss the pitfalls and problems of implementing case management in a large multiservice center and the steps and processes implemented to expedite and move case management forward in its early stages. The insights shared may be useful for those implementing case management in a complex medical center situation. Other models used in similar situations are also reviewed.

  13. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following topics are discussed: methods for the development of logic design together with algorithms for failure testing; a method for the design of logic for ultra-large-scale integration; the extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to compute tests for failures in the mechanism using the diagnosis algorithm; and the development of an algorithm for the multi-output 2-level minimization problem.

  14. The University and the Municipality: Summary of Proceedings of the First Session of the National Association of Municipal Universities. Bulletin, 1915, No. 38. Whole Number 665

    ERIC Educational Resources Information Center

    United States Bureau of Education, Department of the Interior, 1915

    1915-01-01

    The problems of industry, government, and life in the modern industrial and commercial city are numerous, large, and complex. For their solution a larger amount of scientific knowledge and higher standards of intelligence among citizens are needed. All the city's agencies for good and progress need to be united and vitalized for more effective…

  15. Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing

    DTIC Science & Technology

    2006-11-01

    in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and

  16. The statistical power to detect cross-scale interactions at macroscales

    USGS Publications Warehouse

    Wagner, Tyler; Fergus, C. Emi; Stow, Craig A.; Cheruvelil, Kendra S.; Soranno, Patricia A.

    2016-01-01

    Macroscale studies of ecological phenomena are increasingly common because stressors such as climate and land-use change operate at large spatial and temporal scales. Cross-scale interactions (CSIs), where ecological processes operating at one spatial or temporal scale interact with processes operating at another scale, have been documented in a variety of ecosystems and contribute to complex system dynamics. However, studies investigating CSIs are often dependent on compiling multiple data sets from different sources to create multithematic, multiscaled data sets, which results in structurally complex, and sometimes incomplete data sets. The statistical power to detect CSIs needs to be evaluated because of their importance and the challenge of quantifying CSIs using data sets with complex structures and missing observations. We studied this problem using a spatially hierarchical model that measures CSIs between regional agriculture and its effects on the relationship between lake nutrients and lake productivity. We used an existing large multithematic, multiscaled database, LAke multiscaled GeOSpatial, and temporal database (LAGOS), to parameterize the power analysis simulations. We found that the power to detect CSIs was more strongly related to the number of regions in the study rather than the number of lakes nested within each region. CSI power analyses will not only help ecologists design large-scale studies aimed at detecting CSIs, but will also focus attention on CSI effect sizes and the degree to which they are ecologically relevant and detectable with large data sets.

  17. Thermal energy storage to minimize cost and improve efficiency of a polygeneration district energy system in a real-time electricity market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Kody M.; Kim, Jong Suk; Cole, Wesley J.

    2016-10-01

    District energy systems can produce low-cost utilities for large energy networks, but can also be a resource for the electric grid by their ability to ramp production or to store thermal energy by responding to real-time market signals. In this work, dynamic optimization exploits the flexibility of thermal energy storage by determining optimal times to store and extract excess energy. This concept is applied to a polygeneration distributed energy system with combined heat and power, district heating, district cooling, and chilled water thermal energy storage. The system is a university campus responsible for meeting the energy needs of tens of thousands of people. The objective for the dynamic optimization problem is to minimize cost over a 24-h period while meeting multiple loads in real time. The paper presents a novel algorithm to solve this dynamic optimization problem with energy storage by decomposing the problem into multiple static mixed-integer nonlinear programming (MINLP) problems. Another innovative feature of this work is the study of a large, complex energy network which includes the interrelations of a wide variety of energy technologies. Results indicate that a cost savings of 16.5% is realized when the system can participate in the wholesale electricity market.

  18. Solving the Secondary Structure Matching Problem in Cryo-EM De Novo Modeling Using a Constrained K-Shortest Path Graph Algorithm.

    PubMed

    Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing

    2014-01-01

    Electron cryomicroscopy is becoming a major experimental technique for solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ^2 N^2 2^N) algorithm for this NP-hard problem. The algorithm incorporates the dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.

  19. Toward Modeling the Intrinsic Complexity of Test Problems

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  20. Investigating the Effect of Complexity Factors in Stoichiometry Problems Using Logistic Regression and Eye Tracking

    ERIC Educational Resources Information Center

    Tang, Hui; Kirk, John; Pienta, Norbert J.

    2014-01-01

    This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…

  1. A new uniformly valid asymptotic integration algorithm for elasto-plastic creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Walker, Kevin P.

    1991-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
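    The implicit, iterative integration pattern referred to above can be illustrated, in its simplest form, by backward Euler with Newton iterations on a scalar stiff equation; this is a generic sketch of that pattern, not the paper's uniformly valid asymptotic scheme or its HYPELA implementation.

```python
import numpy as np

def backward_euler(f, dfdy, y0, t0, t1, n_steps, newton_iters=8, tol=1e-10):
    """Generic implicit-iterative sketch (backward Euler + Newton) for a scalar
    stiff ODE y' = f(t, y); not the paper's asymptotic integration scheme."""
    h = (t1 - t0) / n_steps
    t, y = t0, float(y0)
    out = [(t, y)]
    for _ in range(n_steps):
        t_next = t + h
        y_next = y                                  # initial Newton guess
        for _ in range(newton_iters):
            r = y_next - y - h * f(t_next, y_next)  # residual of the implicit equation
            dr = 1.0 - h * dfdy(t_next, y_next)     # derivative of the residual
            step = r / dr
            y_next -= step
            if abs(step) < tol:
                break
        t, y = t_next, y_next
        out.append((t, y))
    return out

# Stiff test problem y' = -50*(y - cos(t)), integrated with fairly large steps:
# backward_euler(lambda t, y: -50.0 * (y - np.cos(t)), lambda t, y: -50.0, 1.0, 0.0, 2.0, 40)
```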

  2. A Portable Computer System for Auditing Quality of Ambulatory Care

    PubMed Central

    McCoy, J. Michael; Dunn, Earl V.; Borgiel, Alexander E.

    1987-01-01

    Prior efforts to effectively and efficiently audit quality of ambulatory care based on comprehensive process criteria have been limited largely by the complexity and cost of data abstraction and management. Over the years, several demonstration projects have generated large sets of process criteria and mapping systems for evaluating quality of care, but these paper-based approaches have been impractical to implement on a routine basis. Recognizing that portable microcomputers could solve many of the technical problems in abstracting data from medical records, we built upon previously described criteria and developed a microcomputer-based abstracting system that facilitates reliable and cost-effective data abstraction.

  3. GPU Accelerated DG-FDF Large Eddy Simulator

    NASA Astrophysics Data System (ADS)

    Inkarbekov, Medet; Aitzhan, Aidyn; Sammak, Shervin; Givi, Peyman; Kaltayev, Aidarkhan

    2017-11-01

    A GPU accelerated simulator is developed and implemented for large eddy simulation (LES) of turbulent flows. The filtered density function (FDF) is utilized for modeling of the subgrid scale quantities. The filtered transport equations are solved via a discontinuous Galerkin (DG) method and the FDF is simulated via a particle-based Lagrangian Monte Carlo (MC) method. It is demonstrated that the GPU simulations are of the order of 100 times faster than the CPU-based calculations. This brings LES of turbulent flows to a new level, facilitating efficient simulation of more complex problems. The work at Al-Faraby Kazakh National University is sponsored by MoES of RK under Grant 3298/GF-4.

  4. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    PubMed

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P

    2015-01-01

    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
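    A minimal flavour of the kind of per-sample, low-latency QRS processing described above is sketched below (derivative, squaring, moving-window integration, adaptive threshold). All function names, parameters and thresholds are illustrative assumptions and do not reproduce the paper's streaming pipeline.

```python
from collections import deque

def detect_qrs_stream(samples, fs=250, window_ms=150, refractory_ms=200, k=3.0):
    """Illustrative streaming QRS detector sketch; not the paper's pipeline."""
    win = max(1, int(fs * window_ms / 1000))
    refractory = int(fs * refractory_ms / 1000)
    window = deque(maxlen=win)          # moving-integration buffer
    prev = 0.0
    baseline = 0.0                      # slow running estimate of the feature level
    last_beat = -refractory
    beats = []
    for n, x in enumerate(samples):
        d = x - prev                    # derivative emphasises the steep QRS slope
        prev = x
        window.append(d * d)            # squaring rectifies and amplifies
        feature = sum(window) / len(window)
        baseline = 0.999 * baseline + 0.001 * feature
        # skip the first second while the baseline estimate settles
        if n >= fs and feature > k * baseline and n - last_beat > refractory:
            beats.append(n)             # sample index of a detected QRS complex
            last_beat = n
    return beats
```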

  5. A new uniformly valid asymptotic integration algorithm for elasto-plastic-creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, A.; Walker, K. P.

    1989-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  6. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, we can realize a distributed and load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes how the characteristics of grid computing are applied in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security system that can reduce processing delay and maintain detection rates.

  7. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
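    A toy, heavily simplified example of objective (b), a linearized single-time-step water/energy allocation solved as a small LP, is sketched below; the variable layout, penalties and constraints are assumptions for illustration and do not represent the authors' model or their network programming solver.

```python
from scipy.optimize import linprog

def allocate_step(inflow, storage, demand_water, demand_energy, e_coeff,
                  penalty_water=10.0, penalty_energy=1.0):
    """Toy single-time-step allocation solved as a small LP (illustrative only).

    Decision variables x = [release_to_demand, release_through_turbine, spill,
                            deficit_water, deficit_energy], all non-negative.
    """
    c = [0.0, 0.0, 0.0, penalty_water, penalty_energy]   # minimize weighted deficits
    # water balance: total outflow cannot exceed the water available this step
    A_ub = [[1.0, 1.0, 1.0, 0.0, 0.0]]
    b_ub = [storage + inflow]
    # demand bookkeeping: supplied amount plus deficit equals the demand
    A_eq = [[1.0, 0.0, 0.0, 1.0, 0.0],        # water:  release + deficit_w = demand_water
            [0.0, e_coeff, 0.0, 0.0, 1.0]]    # energy: e_coeff*turbine + deficit_e = demand_energy
    b_eq = [demand_water, demand_energy]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 5)
    return res.x, res.fun
```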

  8. Prediction of Human Activity by Discovering Temporal Sequence Patterns.

    PubMed

    Li, Kang; Fu, Yun

    2014-08-01

    Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection of short-duration simple actions, we propose a novel framework for long-duration complex activity prediction by discovering three key aspects of activity: Causality, Context-cue, and Predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) the probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large and small order Markov dependencies between action units are captured; (3) the context-cue, especially interactive object information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrences are encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.
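    A simplified stand-in for the causality component (2) is sketched below: a bounded-order suffix model over action symbols that predicts the next action from the longest observed context. It conveys the flavour of a probabilistic suffix tree but is not the authors' PST implementation; class and method names are assumptions.

```python
from collections import defaultdict, Counter

class SuffixPredictor:
    """Simplified stand-in for a probabilistic suffix tree: a bounded-order
    suffix model over action symbols (illustrative, not the authors' PST)."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)     # context tuple -> next-symbol counts

    def train(self, sequence):
        for i in range(len(sequence)):
            for k in range(1, self.max_order + 1):
                if i - k >= 0:
                    ctx = tuple(sequence[i - k:i])
                    self.counts[ctx][sequence[i]] += 1

    def predict(self, history):
        # back off from the longest matching suffix to shorter ones
        for k in range(min(self.max_order, len(history)), 0, -1):
            ctx = tuple(history[-k:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
        return None

# e.g.
# p = SuffixPredictor(); p.train(["reach", "grasp", "pour"] * 5)
# p.predict(["reach", "grasp"])  # -> "pour"
```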

  9. Stability-to-instability transition in the structure of large-scale networks

    NASA Astrophysics Data System (ADS)

    Hu, Dandan; Ronhovde, Peter; Nussinov, Zohar

    2012-12-01

    We examine phase transitions between the “easy,” “hard,” and “unsolvable” phases when attempting to identify structure in large complex networks (“community detection”) in the presence of disorder induced by network “noise” (spurious links that obscure structure), heat bath temperature T, and system size N. The partition of a graph into q optimally disjoint subgraphs or “communities” inherently requires Potts-type variables. In earlier work [Philos. Mag. 92, 406 (2012)], when examining power law and other networks (and general associated Potts models), we illustrated that transitions in the computational complexity of the community detection problem typically correspond to spin-glass-type transitions (and transitions to chaotic dynamics in mechanical analogs) at both high and low temperatures and/or noise. The computationally “hard” phase exhibits spin-glass type behavior including memory effects. The region over which the hard phase extends in the noise and temperature phase diagram decreases as N increases while holding the average number of nodes per community fixed. This suggests that in the thermodynamic limit a direct sharp transition may occur between the easy and unsolvable phases. When present, transitions at low temperature or low noise correspond to entropy driven (or “order by disorder”) annealing effects, wherein stability may initially increase as temperature or noise is increased before becoming unsolvable at sufficiently high temperature or noise. Additional transitions between contending viable solutions (such as those at different natural scales) are also possible. Community structure can also be identified via a dynamical approach, in which “chaotic-type” transitions were found earlier. The correspondence between the spin-glass-type complexity transitions and transitions into chaos in dynamical analogs might extend to other hard computational problems. In this work, we examine large networks (with a power law distribution in cluster size) that have a large number of communities (q≫1). We infer that large systems at a constant ratio of q to the number of nodes N asymptotically tend towards insolvability in the limit of large N for any positive T. The asymptotic behavior of temperatures below which structure identification might be possible, T× = O[1/ln q], decreases slowly, so for practical system sizes there remains an accessible, and generally easy, global solvable phase at low temperature. We further employ multivariate Tutte polynomials to show that increasing q emulates increasing T for a general Potts model, leading to a similar stability region at low T. Given the relation between Tutte and Jones polynomials, our results further suggest a link between the above complexity transitions and transitions associated with random knots.

  10. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  11. Multigrid Methods for Aerodynamic Problems in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1995-01-01

    Work has been directed at the development of efficient multigrid methods for the solution of aerodynamic problems involving complex geometries, including the development of computational methods for the solution of both inviscid and viscous transonic flow problems. The emphasis is on problems of complex, three-dimensional geometry. The methods developed are based upon finite-volume approximations to both the Euler and the Reynolds-Averaged Navier-Stokes equations. The methods are developed for use on multi-block grids using diagonalized implicit multigrid methods to achieve computational efficiency. The work is focused upon aerodynamic problems involving complex geometries, including advanced engine inlets.

  12. Distributed Coordination of Heterogeneous Agents Using a Semantic Overlay Network and a Goal-Directed Graphplan Planner

    PubMed Central

    Lopes, António Luís; Botelho, Luís Miguel

    2013-01-01

    In this paper, we describe a distributed coordination system that allows agents to seamlessly cooperate in problem solving by partially contributing to a problem solution and delegating the subproblems for which they do not have the required skills or knowledge to appropriate agents. The coordination mechanism relies on a dynamically built semantic overlay network that allows the agents to efficiently locate, even in very large unstructured networks, the necessary skills for a specific problem. Each agent performs partial contributions to the problem solution using a new distributed goal-directed version of the Graphplan algorithm. This new goal-directed version of the original Graphplan algorithm provides an efficient solution to the problem of "distraction", which most forward-chaining algorithms suffer from. We also discuss a set of heuristics to be used in the backward-search process of the planning algorithm in order to distribute this process amongst idle agents in an attempt to find a solution in less time. The evaluation results show that our approach is effective in building a scalable and efficient agent society capable of solving complex distributable problems. PMID:23704885

  13. DockTrina: docking triangular protein trimers.

    PubMed

    Popov, Petr; Ritchie, David W; Grudinin, Sergei

    2014-01-01

    In spite of the abundance of oligomeric proteins within a cell, the structural characterization of protein-protein interactions is still a challenging task. In particular, many of these interactions involve heteromeric complexes, which are relatively difficult to determine experimentally. Hence there is growing interest in using computational techniques to model such complexes. However, assembling large heteromeric complexes computationally is a highly combinatorial problem. Nonetheless the problem can be simplified greatly by considering interactions between protein trimers. After dimers and monomers, triangular trimers (i.e. trimers with pair-wise contacts between all three pairs of proteins) are the most frequently observed quaternary structural motifs according to the three-dimensional (3D) complex database. This article presents DockTrina, a novel protein docking method for modeling the 3D structures of nonsymmetrical triangular trimers. The method takes as input pair-wise contact predictions from a rigid body docking program. It then scans and scores all possible combinations of pairs of monomers using a very fast root mean square deviation test. Finally, it ranks the predictions using a scoring function which combines triples of pair-wise contact terms and a geometric clash penalty term. The overall approach takes less than 2 min per complex on a modern desktop computer. The method is tested and validated using a benchmark set of 220 bound and seven unbound protein trimer structures. DockTrina will be made available at http://nano-d.inrialpes.fr/software/docktrina. Copyright © 2013 Wiley Periodicals, Inc.
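    The quantity underlying the fast RMSD test mentioned above is the root mean square deviation between two equally sized coordinate sets; a minimal sketch is given below. DockTrina's actual test operates on pairwise docking transforms, so this is only the basic measure, and the function name is an assumption.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equally sized, pre-aligned
    N x 3 coordinate arrays (illustrative; the function name is an assumption)."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```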

  14. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  15. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions

    DTIC Science & Technology

    2015-07-14

    AFRL-OSR-VA-TR-2015-0202. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions. Grant number FA9550-12-1... The project models how teams function as they solve complex problems and proposes means to improve the performance of teams under changing or adversarial conditions.

  16. A Thick-Restart Lanczos Algorithm with Polynomial Filtering for Hermitian Eigenvalue Problems

    DOE PAGES

    Li, Ruipeng; Xi, Yuanzhe; Vecharynski, Eugene; ...

    2016-08-16

    Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a thick-restart version of the Lanczos algorithm with deflation ("locking") and a new type of polynomial filter obtained from a least-squares technique. Furthermore, the resulting algorithm can be utilized in a "spectrum-slicing" approach whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting eigenpairs located in different subintervals independently from one another.
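
    The filtering idea can be illustrated with a deliberately crude sketch: the degree-two polynomial p(t) = radius^2 - (t - sigma)^2 (spectrum folding) maps the eigenvalues of A nearest a target sigma to the largest eigenvalues of p(A), so a Krylov eigensolver applied to p(A) extracts interior eigenpairs without factorizing A. The least-squares Chebyshev filters and thick-restart Lanczos machinery described above are far sharper; the function names and the use of scipy.sparse.linalg.eigsh here are illustrative choices, not the paper's implementation.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, eigsh

    def folded_filter_eigs(A, sigma, radius, k=6):
        """Crude polynomial filter p(t) = radius**2 - (t - sigma)**2: eigenvalues of A
        closest to the target sigma become the largest eigenvalues of p(A), so a
        Krylov eigensolver applied to p(A) extracts interior eigenpairs without
        factorizing A.  'radius' must bound |lambda - sigma| over the spectrum."""
        n = A.shape[0]

        def matvec(v):
            w = A @ v - sigma * v                        # (A - sigma I) v
            return radius**2 * v - (A @ w - sigma * w)   # p(A) v

        pA = LinearOperator((n, n), matvec=matvec, dtype=float)
        _, V = eigsh(pA, k=k, which='LA')                # dominant eigenvectors of p(A)
        ritz = np.array([v @ (A @ v) for v in V.T])      # Rayleigh quotients with A
        order = np.argsort(np.abs(ritz - sigma))
        return ritz[order], V[:, order]

    # Example: six eigenvalues of a diagonal test matrix nearest sigma = 0.5
    A = sp.diags(np.linspace(0.0, 1.0, 100))
    vals, vecs = folded_filter_eigs(A, sigma=0.5, radius=1.0, k=6)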

  17. Substance use in pregnancy.

    PubMed

    Slater, Louise

    2015-01-01

    The increase in substance use which occurred in the 1980s was disproportionately large among women of reproductive age, so both the numbers of women who use drugs and the duration of drug use have increased (Hepburn 2004). While drug use occurs throughout society, the type and pattern of drug use that is associated with medical and social problems is closely associated with socio-economic deprivation. Socio-economic deprivation is in turn associated with unhealthy lifestyles and behaviours such as smoking and poor diet. Deprivation, associated lifestyles and substance use adversely affect the health of mother and baby, so the effects are cumulative. Consequently women with problem drug and/or alcohol use have potentially complex pregnancies (Hepburn 2004).

  18. Revisiting the structures of several antibiotics bound to the bacterial ribosome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bulkley, David; Innis, C. Axel; Blaha, Gregor

    2010-10-08

    The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model. Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined of complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.

  19. Revisiting the Structures of Several Antibiotics Bound to the Bacterial Ribosome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D Bulkley; C Innis; G Blaha

    2011-12-31

    The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model. Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined of complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.

  20. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
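
    A toy sketch of the "replace the negatives by a prior" idea, under the simplifying assumptions that the prior is a single Gaussian fitted to background data and that the classifier is linear (the kernel and ensemble extensions mentioned above are not shown; all names are illustrative): the linear score is Gaussian under the prior, so the decision threshold can be set analytically to bound the false-positive rate without ever storing the negative set.

    import numpy as np
    from scipy.stats import norm

    def fit_background_prior(X_bg):
        """Summarize a (potentially huge) negative sample by a Gaussian prior N(mu, Sigma)."""
        return X_bg.mean(axis=0), np.cov(X_bg, rowvar=False)

    def threshold_for_fpr(w, mu, Sigma, fpr=1e-3):
        """Under the prior, the score s = w.x is N(w.mu, w' Sigma w); pick the
        (1 - fpr) quantile so that P(false positive) <= fpr under the prior."""
        return norm.ppf(1.0 - fpr, loc=w @ mu, scale=np.sqrt(w @ Sigma @ w))

    def hybrid_classifier(X_pos, X_bg, fpr=1e-3):
        """Minimal 'hybrid' rule: direction = positive mean minus prior mean,
        threshold computed from the prior alone (no negative samples retained)."""
        mu, Sigma = fit_background_prior(X_bg)
        w = X_pos.mean(axis=0) - mu
        w = w / np.linalg.norm(w)
        t = threshold_for_fpr(w, mu, Sigma, fpr)
        return lambda X: X @ w > t

    # Usage sketch with synthetic features
    rng = np.random.default_rng(0)
    X_bg = rng.normal(size=(10000, 16))
    X_pos = rng.normal(loc=1.5, size=(200, 16))
    predict = hybrid_classifier(X_pos, X_bg)
    labels = predict(rng.normal(size=(5, 16)))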

  1. On estimating the phase of periodic waveform in additive Gaussian noise, part 2

    NASA Astrophysics Data System (ADS)

    Rauch, L. L.

    1984-11-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight into the estimation problem for the small noise and large noise cases.
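
    For the simplest case above (known constant frequency, unknown constant phase), and under the added assumption of a flat prior so that the MAP estimate coincides with maximum likelihood, the estimator reduces to correlating the data against quadrature components. A minimal sketch:

    import numpy as np

    def phase_estimate(y, t, omega):
        """Estimate phi for y[k] = A*sin(omega*t[k] + phi) + white Gaussian noise,
        with omega known.  Expanding sin(omega*t + phi) makes the model linear in
        (cos phi, sin phi); maximizing the likelihood gives the angle of the
        in-phase / quadrature correlations."""
        I = np.sum(y * np.sin(omega * t))   # in-phase correlation
        Q = np.sum(y * np.cos(omega * t))   # quadrature correlation
        return np.arctan2(Q, I)

    # Quick self-check on synthetic data
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1000)
    omega, phi_true = 2 * np.pi * 10.0, 0.7
    y = np.sin(omega * t + phi_true) + 0.3 * rng.standard_normal(t.size)
    print(phase_estimate(y, t, omega))      # close to 0.7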

  2. On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2

    NASA Technical Reports Server (NTRS)

    Rauch, L. L.

    1984-01-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight into the estimation problem for the small noise and large noise cases.

  3. Economic aspects of rare diseases.

    PubMed

    Borski, Krzysztof

    2015-01-01

    Economic problems related to the prevention, diagnosis and treatment of rare diseases are presented, with particular attention to the costs of financing treatment, including the issue of reimbursement, which is a fundamental and difficult-to-solve economic problem for the health care system. Rare diseases, despite their low frequency of occurrence, together cover a large group of conditions and constitute a serious medical, social and economic problem. The adoption of the Polish National Plan for Rare Diseases, resulting from the recommendations of the Council of the European Union, the extension of institutional activities in the area of public health, and social initiatives seeking innovative solutions to create a model of social support for patients and their families, given the very high complexity of the issues surrounding rare diseases, result in the need for coherent, comprehensive system operations and the adoption of comprehensive solutions.

  4. "Rebuilding our community": hearing silenced voices on Aboriginal youth suicide.

    PubMed

    Walls, Melissa L; Hautala, Dane; Hurley, Jenna

    2014-02-01

    This paper brings forth the voices of adult Aboriginal First Nations community members who gathered in focus groups to discuss the problem of youth suicide on their reserves. Our approach emphasizes multilevel (e.g., individual, family, and broader ecological systems) factors viewed by participants as relevant to youth suicide. Wheaton's conceptualization of stressors and Evans-Campbell's multilevel classification of the impacts of historical trauma are used as theoretical and analytic guides. Thematic analysis of qualitative data transcripts revealed a highly complex intersection of stressors, traumas, and social problems seen by community members as underlying mechanisms influencing heightened levels of Aboriginal youth suicidality. Our multilevel coding approach revealed that suicidal behaviors were described by community members largely as a problem with deep historical and contemporary structural roots, as opposed to being viewed as individualized pathology.

  5. An outer approximation method for the road network design problem

    PubMed Central

    2018-01-01

    Best investment in the road infrastructure or the network design is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We engage with the complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. It results in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient to solve the DNDP for a large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than those found in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well. PMID:29590111
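
    In generic notation (not the authors' exact formulation), the bilevel DNDP and the single-level relaxation described above can be sketched as

        \min_{y \in \{0,1\}^{|\bar{A}|}} \; \sum_{a} t_a\big(x_a(y)\big)\, x_a(y)
        \quad \text{s.t.} \quad \sum_{a \in \bar{A}} c_a y_a \le B,
        \qquad x(y) \in \arg\min_{x \in X(y)} \sum_{a} \int_0^{x_a} t_a(\omega)\, d\omega ,

    where y selects candidate links \bar{A} within budget B, t_a is the link travel-time function, and the lower-level problem is the user equilibrium traffic assignment (UE-TAP). The relaxation moves the equilibrium conditions of the lower level into the constraint set of a single minimization over (y, x), yielding the MINLP that the Outer Approximation algorithm then solves.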

  6. An outer approximation method for the road network design problem.

    PubMed

    Asadi Bagloee, Saeed; Sarvi, Majid

    2018-01-01

    Best investment in the road infrastructure or the network design is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We engage with the complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. It results in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient to solve the DNDP for a large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than those found in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well.

  7. A bifurcation giving birth to order in an impulsively driven complex system

    NASA Astrophysics Data System (ADS)

    Seshadri, Akshay; Sujith, R. I.

    2016-08-01

    Nonlinear oscillations lie at the heart of numerous complex systems. Impulsive forcing arises naturally in many scenarios, and we endeavour to study nonlinear oscillators subject to such forcing. We model these kicked oscillatory systems as a piecewise smooth dynamical system, whereby their dynamics can be investigated. We investigate the problem of pattern formation in a turbulent combustion system and apply this formalism with the aim of explaining the observed dynamics. We identify that the transition of this system from low amplitude chaotic oscillations to large amplitude periodic oscillations is the result of a discontinuity induced bifurcation. Further, we provide an explanation for the occurrence of intermittent oscillations in the system.
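
    A minimal sketch of the kind of piecewise smooth, impulsively kicked model referred to above: a damped linear oscillator is integrated smoothly between kicks, and at each kick instant the velocity jumps by a state-dependent amount. Parameters and the kick rule are illustrative only, not the turbulent combustion model studied in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    def kicked_oscillator(kick_amp=1.0, kick_period=1.0, zeta=0.05, omega0=2*np.pi,
                          n_kicks=200, x0=(0.01, 0.0)):
        """Stroboscopic samples of a damped oscillator subject to impulsive,
        state-dependent velocity kicks: a piecewise smooth dynamical system."""
        def smooth_flow(t, s):
            x, v = s
            return [v, -2.0*zeta*omega0*v - omega0**2 * x]

        state = np.array(x0, dtype=float)
        samples = [state.copy()]
        for _ in range(n_kicks):
            sol = solve_ivp(smooth_flow, (0.0, kick_period), state, rtol=1e-8, atol=1e-10)
            state = sol.y[:, -1]
            state[1] += kick_amp * np.sign(state[0])   # impulsive kick at the sampling instant
            samples.append(state.copy())
        return np.array(samples)

    orbit = kicked_oscillator()   # rows are (x, v) just after each kick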

  8. Coupled land surface/hydrologic/atmospheric models

    NASA Technical Reports Server (NTRS)

    Pielke, Roger; Steyaert, Lou; Arritt, Ray; Lahtakia, Mercedes; Smith, Chris; Ziegler, Conrad; Soong, Su Tzai; Avissar, Roni; Wetzel, Peter; Sellers, Piers

    1993-01-01

    The topics covered include the following: prototype land cover characteristics data base for the conterminous United States; surface evapotranspiration effects on cumulus convection and implications for mesoscale models; the use of complex treatment of surface hydrology and thermodynamics within a mesoscale model and some related issues; initialization of soil-water content for regional-scale atmospheric prediction models; impact of surface properties on dryline and MCS evolution; a numerical simulation of heavy precipitation over the complex topography of California; representing mesoscale fluxes induced by landscape discontinuities in global climate models; emphasizing the role of subgrid-scale heterogeneity in surface-air interaction; and problems with modeling and measuring biosphere-atmosphere exchanges of energy, water, and carbon on large scales.

  9. A bifurcation giving birth to order in an impulsively driven complex system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadri, Akshay, E-mail: akshayseshadri@gmail.com; Sujith, R. I., E-mail: sujith@iitm.ac.in

    Nonlinear oscillations lie at the heart of numerous complex systems. Impulsive forcing arises naturally in many scenarios, and we endeavour to study nonlinear oscillators subject to such forcing. We model these kicked oscillatory systems as a piecewise smooth dynamical system, whereby their dynamics can be investigated. We investigate the problem of pattern formation in a turbulent combustion system and apply this formalism with the aim of explaining the observed dynamics. We identify that the transition of this system from low amplitude chaotic oscillations to large amplitude periodic oscillations is the result of a discontinuity induced bifurcation. Further, we provide an explanation for the occurrence of intermittent oscillations in the system.

  10. Virtual Control Policy for Binary Ordered Resources Petri Net Class.

    PubMed

    Rovetto, Carlos A; Concepción, Tomás J; Cano, Elia Esther

    2016-08-18

    Prevention and avoidance of deadlocks in sensor networks that use the wormhole routing algorithm is an active research domain. Diverse control policies address this problem; our approach offers a new method. In this paper we present a virtual control policy for the new specialized Petri net subclass called Binary Ordered Resources Petri Net (BORPN). Essentially, it is an ordinary class constructed from various state machines that share unitary resources in a complex form, which allows branching and joining of processes. The reduced structure of this new class gives advantages that allow analysis of the entire system's behavior, which is a prohibitive task for large systems because of their complexity and routing algorithms.
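
    For readers unfamiliar with the substrate, here is a minimal sketch of an ordinary Petri net and its firing rule, on top of which subclasses such as BORPN and their control policies are defined. The class, matrices and example marking below are generic illustrations, not the BORPN construction.

    import numpy as np

    class PetriNet:
        """Ordinary Petri net: places, transitions, and unit-weight arcs encoded as
        pre/post incidence matrices (rows = places, columns = transitions)."""
        def __init__(self, pre, post, marking):
            self.pre = np.asarray(pre)      # tokens consumed by each transition
            self.post = np.asarray(post)    # tokens produced by each transition
            self.m = np.asarray(marking)    # current marking (tokens per place)

        def enabled(self, t):
            return np.all(self.m >= self.pre[:, t])

        def fire(self, t):
            if not self.enabled(t):
                raise ValueError("transition not enabled")
            self.m = self.m - self.pre[:, t] + self.post[:, t]

    # Toy example: a process acquiring and releasing one shared unitary resource.
    # Places: 0 = process idle, 1 = process working, 2 = resource available.
    # Transition t0 acquires the resource, t1 releases it.
    pre  = [[1, 0], [0, 1], [1, 0]]
    post = [[0, 1], [1, 0], [0, 1]]
    net = PetriNet(pre, post, marking=[1, 0, 1])
    net.fire(0); net.fire(1)    # acquire then release; marking returns to [1, 0, 1]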

  11. The MYO6 interactome reveals adaptor complexes coordinating early endosome and cytoskeletal dynamics.

    PubMed

    O'Loughlin, Thomas; Masters, Thomas A; Buss, Folma

    2018-04-01

    The intracellular functions of myosin motors require a number of adaptor molecules, which control cargo attachment, but also fine-tune motor activity in time and space. These motor-adaptor-cargo interactions are often weak, transient or highly regulated. To overcome these problems, we use a proximity labelling-based proteomics strategy to map the interactome of the unique minus end-directed actin motor MYO6. Detailed biochemical and functional analysis identified several distinct MYO6-adaptor modules including two complexes containing RhoGEFs: the LIFT (LARG-Induced F-actin for Tethering) complex that controls endosome positioning and motility through RHO-driven actin polymerisation; and the DISP (DOCK7-Induced Septin disPlacement) complex, a novel regulator of the septin cytoskeleton. These complexes emphasise the role of MYO6 in coordinating endosome dynamics and cytoskeletal architecture. This study provides the first in vivo interactome of a myosin motor protein and highlights the power of this approach in uncovering dynamic and functionally diverse myosin motor complexes. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  12. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
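
    A skeletal, single-objective sketch of the hybridization idea only (not the TSEA algorithm): a genetic algorithm whose offspring are filtered through a tabu list of recently visited genotypes, pushing the search toward unexplored regions. A multiobjective version would replace the scalar fitness sort with Pareto ranking; the toy objective and all parameters are invented.

    import random
    from collections import deque

    def tabu_genetic_search(fitness, n_bits=20, pop_size=30, tabu_len=200,
                            generations=200, p_mut=0.05, seed=0):
        """Toy tabu-search / genetic-algorithm hybrid on bit strings."""
        rng = random.Random(seed)
        def random_ind():
            return tuple(rng.randint(0, 1) for _ in range(n_bits))
        def crossover(a, b):
            cut = rng.randrange(1, n_bits)
            return a[:cut] + b[cut:]
        def mutate(ind):
            return tuple(1 - g if rng.random() < p_mut else g for g in ind)

        pop = [random_ind() for _ in range(pop_size)]
        tabu = deque(maxlen=tabu_len)
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                child = mutate(crossover(*rng.sample(parents, 2)))
                if child in tabu:       # tabu move: reject recently visited genotypes
                    continue
                tabu.append(child)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = tabu_genetic_search(fitness=sum)   # toy objective: maximize the number of ones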

  13. Applications of fidelity measures to complex quantum systems

    PubMed Central

    2016-01-01

    We revisit fidelity as a measure for the stability and the complexity of the quantum motion of single- and many-body systems. Within the context of cold atoms, we present an overview of applications of two fidelities, which we call static and dynamical fidelity, respectively. The static fidelity applies to quantum problems which can be diagonalized since it is defined via the eigenfunctions. In particular, we show that the static fidelity is a highly effective practical detector of avoided crossings characterizing the complexity of the systems and their evolutions. The dynamical fidelity is defined via the time-dependent wave functions. Focusing on the quantum kicked rotor system, we highlight a few practical applications of fidelity measurements in order to better understand the large variety of dynamical regimes of this paradigm of a low-dimensional system with mixed regular–chaotic phase space. PMID:27140967
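
    For reference, the two quantities are commonly written as follows (generic notation; a hedged statement rather than the paper's exact definitions): the static fidelity as the overlap of eigenfunctions of a Hamiltonian at nearby parameter values, and the dynamical fidelity (Loschmidt echo) as the overlap of a state propagated by two slightly different Hamiltonians,

        F_{\mathrm{static}}(\lambda, \lambda + \delta\lambda)
            = \left| \langle \phi_n(\lambda) \,|\, \phi_n(\lambda + \delta\lambda) \rangle \right|^2 ,
        \qquad
        F_{\mathrm{dyn}}(t)
            = \left| \langle \psi_0 \,|\, e^{\,i H_{\lambda+\delta\lambda} t/\hbar}\, e^{-i H_{\lambda} t/\hbar} \,|\, \psi_0 \rangle \right|^2 .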

  14. Sharing the responsibility for driver distraction across road transport systems: a systems approach to the management of distracted driving.

    PubMed

    Young, Kristie L; Salmon, Paul M

    2015-01-01

    Distracted driving is acknowledged universally as a large and growing road safety problem. Compounding the problem is that distracted driving is a complex, multifaceted issue influenced by a multitude of factors, organisations and individuals. As such, management of the problem is not straightforward. Numerous countermeasures have been developed and implemented across the globe. The vast majority of these measures have derived from the traditional reductionist, driver-centric approach to distraction and have failed to fully reflect the complex mix of actors and components that give rise to drivers becoming distracted. An alternative approach that is gaining momentum in road safety is the systems approach, which considers all components of the system and their interactions as an integrated whole. In this paper, we review the current knowledge base on driver distraction and argue that the systems approach is not currently being realised in practice. Adopting a more holistic, systems approach to distracted driving will not only improve existing knowledge and interventions from the traditional approach, but will enhance our understanding and management of distraction by considering the complex relationships and interactions of the multiple actors and the myriad sources, enablers and interventions that make up the distracted driving system. It is only by recognising and understanding how all of the system components work together to enable distraction to occur, that we can start to work on solutions to help mitigate the occurrence and consequences of distracted driving. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Proper care for the dying: a critical public issue.

    PubMed Central

    Crispell, K R; Gomez, C F

    1987-01-01

    The ability of the medical profession to sustain life, or more appropriately, to prolong dying, in patients with terminal illness, creates a most complex and controversial situation for all involved: the patient, if mentally alert; the patient's family; and the medical care team including physicians, nurses and attendants. This situation is especially complex in large acute care hospitals where medical and nursing students, residents and house officers receive advanced medical training. A major problem, prolonging the dying of the terminally ill, with its medical, legal, ethical and economic complexities now confronts American society. The problem is particularly acute in teaching hospitals, in which one finds a disproportionate number of terminally ill patients. The ability to work at these questions as a community rather than as adversaries will determine much about the ability of the health care system to respect the dignity and autonomy of those who seek aid and comfort when faced with serious illness and impending death. Better communication between the physicians, health care providers, the lawyers and ethicists must be developed in order to solve these problems. Over the next ten years society and our elected representatives will be making very demanding decisions about the use of the health dollar. One possible way to prevent increasing costs is to reach significant agreement on the proper care of the dying. Proper care for the dying is being considered, discussed, and evaluated by very thoughtful people. It is not governments which should decide who is to live or who is to die. There is the serious problem of the 'slippery slope' to euthanasia by omission if cost containment becomes the major force in formulating policy on the proper care of the dying. PMID:3612698

  16. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
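
    For context, a compact sketch of the generic subspace pursuit iteration for a sparse linear model y ≈ Ax with sparsity level K; the SPIGH solution described above adds the hierarchical, subspace-projected machinery needed for the MEG source space, which this sketch omits. Names and the stopping rule are illustrative.

    import numpy as np

    def subspace_pursuit(A, y, K, max_iter=50):
        """Generic subspace pursuit for y ~ A @ x with a K-sparse x:
        expand the support with the K columns most correlated with the residual,
        solve least squares on the enlarged support, then prune back to the K
        largest coefficients; stop when the residual no longer decreases."""
        def ls_on(support):
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x = np.zeros(A.shape[1])
            x[support] = sol
            return x

        support = np.argsort(np.abs(A.T @ y))[-K:]
        x = ls_on(support)
        resid = y - A @ x
        for _ in range(max_iter):
            candidates = np.union1d(support, np.argsort(np.abs(A.T @ resid))[-K:])
            x_tmp = ls_on(candidates)
            support = candidates[np.argsort(np.abs(x_tmp[candidates]))[-K:]]
            x_new = ls_on(support)
            resid_new = y - A @ x_new
            if np.linalg.norm(resid_new) >= np.linalg.norm(resid):
                break
            x, resid = x_new, resid_new
        return x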

  17. A Godunov-like point-centered essentially Lagrangian hydrodynamic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.

    We present an essentially Lagrangian hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedron meshes. The scheme reduces to a purely Lagrangian approach when the flow is linear or if the mesh size is equal to zero; as a result, we use the term essentially Lagrangian for the proposed approach. The motivation for developing a hydrodynamic method for tetrahedron meshes is that tetrahedron meshes have some advantages over other mesh topologies. Notable advantages include reduced complexity in generating conformal meshes, reduced complexity in mesh reconnection, and preserving tetrahedron cells with automatic mesh refinement. A challenge, however, is that tetrahedron meshes do not correctly deform with a lower order (i.e. piecewise constant) staggered-grid hydrodynamic scheme (SGH) or with a cell-centered hydrodynamic (CCH) scheme. The SGH and CCH approaches calculate the strain via the tetrahedron, which can cause artificial stiffness on large deformation problems. To resolve the stiffness problem, we adopt the point-centered hydrodynamic approach (PCH) and calculate the evolution of the flow via an integration path around the node. The PCH approach stores the conserved variables (mass, momentum, and total energy) at the node. The evolution equations for momentum and total energy are discretized using an edge-based finite element (FE) approach with linear basis functions. A multidirectional Riemann-like problem is introduced at the center of the tetrahedron to account for discontinuities in the flow such as a shock. Conservation is enforced at each tetrahedron center. The multidimensional Riemann-like problem used here is based on Lagrangian CCH work [8, 19, 37, 38, 44] and recent Lagrangian SGH work [33-35, 39, 45]. In addition, an approximate 1D Riemann problem is solved on each face of the nodal control volume to advect mass, momentum, and total energy. The 1D Riemann problem produces fluxes [18] that remove a volume error in the PCH discretization. A 2-stage Runge–Kutta method is used to evolve the solution in time. The details of the new hydrodynamic scheme are discussed; likewise, results from numerical test problems are presented.
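
    For orientation, the simplest approximate Riemann solver commonly used in this Lagrangian setting is the acoustic (linearized) one, which returns a single interface velocity and pressure from the left/right velocities, pressures and acoustic impedances; the multidirectional Riemann-like problem described above is considerably more elaborate. In generic notation,

        u^* = \frac{z_L u_L + z_R u_R + p_L - p_R}{z_L + z_R},
        \qquad
        p^* = \frac{z_R p_L + z_L p_R + z_L z_R (u_L - u_R)}{z_L + z_R},
        \qquad z = \rho c ,

    where u is the velocity component along the interface normal pointing from the left state to the right state.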

  18. A Godunov-like point-centered essentially Lagrangian hydrodynamic approach

    DOE PAGES

    Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.; ...

    2014-10-28

    We present an essentially Lagrangian hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedron meshes. The scheme reduces to a purely Lagrangian approach when the flow is linear or if the mesh size is equal to zero; as a result, we use the term essentially Lagrangian for the proposed approach. The motivation for developing a hydrodynamic method for tetrahedron meshes is that tetrahedron meshes have some advantages over other mesh topologies. Notable advantages include reduced complexity in generating conformal meshes, reduced complexity in mesh reconnection, and preserving tetrahedron cells with automatic mesh refinement. A challenge, however, is that tetrahedron meshes do not correctly deform with a lower order (i.e. piecewise constant) staggered-grid hydrodynamic scheme (SGH) or with a cell-centered hydrodynamic (CCH) scheme. The SGH and CCH approaches calculate the strain via the tetrahedron, which can cause artificial stiffness on large deformation problems. To resolve the stiffness problem, we adopt the point-centered hydrodynamic approach (PCH) and calculate the evolution of the flow via an integration path around the node. The PCH approach stores the conserved variables (mass, momentum, and total energy) at the node. The evolution equations for momentum and total energy are discretized using an edge-based finite element (FE) approach with linear basis functions. A multidirectional Riemann-like problem is introduced at the center of the tetrahedron to account for discontinuities in the flow such as a shock. Conservation is enforced at each tetrahedron center. The multidimensional Riemann-like problem used here is based on Lagrangian CCH work [8, 19, 37, 38, 44] and recent Lagrangian SGH work [33-35, 39, 45]. In addition, an approximate 1D Riemann problem is solved on each face of the nodal control volume to advect mass, momentum, and total energy. The 1D Riemann problem produces fluxes [18] that remove a volume error in the PCH discretization. A 2-stage Runge–Kutta method is used to evolve the solution in time. The details of the new hydrodynamic scheme are discussed; likewise, results from numerical test problems are presented.

  19. Students' conceptual performance on synthesis physics problems with varying mathematical complexity

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-06-01

    A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems, sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N =179 ) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.

  20. Large system change challenges: addressing complex critical issues in linked physical and social domains

    NASA Astrophysics Data System (ADS)

    Waddell, Steve; Cornell, Sarah; Hsueh, Joe; Ozer, Ceren; McLachlan, Milla; Birney, Anna

    2015-04-01

    Most action to address contemporary complex challenges, including the urgent issues of global sustainability, occurs piecemeal and without meaningful guidance from leading complex change knowledge and methods. The potential benefit of using such knowledge is greater efficacy of effort and investment. However, this knowledge and its associated tools and methods are under-utilized because understanding about them is low, fragmented between diverse knowledge traditions, and often requires shifts in mindsets and skills from expert-led to participant-based action. We have been engaged in diverse action-oriented research efforts in Large System Change for sustainability. For us, "large" systems can be characterized as large-scale systems - up to global - with many components, of many kinds (physical, biological, institutional, cultural/conceptual), operating at multiple levels, driven by multiple forces, and presenting major challenges for people involved. We see change of such systems as complex challenges, in contrast with simple or complicated problems, or chaotic situations. In other words, issues and sub-systems have unclear boundaries, interact with each other, and are often contradictory; dynamics are non-linear; issues are not "controllable", and "solutions" are "emergent" and often paradoxical. Since choices are opportunity-, power- and value-driven, these social, institutional and cultural factors need to be made explicit in any actionable theory of change. Our emerging network is sharing and building a knowledge base of experience, heuristics, and theories of change from multiple disciplines and practice domains. We will present our views on focal issues for the development of the field of large system change, which include processes of goal-setting and alignment; leverage of systemic transitions and transformation; and the role of choice in influencing critical change processes, when only some sub-systems or levels of the system behave in purposeful ways, while others are undeniably and unavoidably deterministic.

  1. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  2. Formative feedback and scaffolding for developing complex problem solving and modelling outcomes

    NASA Astrophysics Data System (ADS)

    Frank, Brian; Simper, Natalie; Kaupp, James

    2018-07-01

    This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.

  3. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert

    PubMed Central

    Schmidt, Henk G.; Rikers, Remy M. J. P.; Custers, Eugene J. F. M.; Splinter, Ted A. W.; van Saase, Jan L. C. M.

    2010-01-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices’ decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases. PMID:20354726

  4. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert.

    PubMed

    Mamede, Sílvia; Schmidt, Henk G; Rikers, Remy M J P; Custers, Eugene J F M; Splinter, Ted A W; van Saase, Jan L C M

    2010-11-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices' decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases.

  5. Preventing adolescent pregnancy and associated risks.

    PubMed Central

    Miller, R.

    1995-01-01

    Adolescent pregnancy is a complex and frustrating problem that exacts a large social and personal cost. This year approximately 40,000 Canadian teenagers will become pregnant. With proper prevention, this number could be reduced. Pregnant teenagers seem to be at increased risk for some obstetric complications and their children for some neonatal complications. Family physicians who see patients over the course of a lifetime are in a good position to prevent adolescent pregnancy and the associated complications. PMID:8520241

  6. Problems of Automation and Management Principles Information Flow in Manufacturing

    NASA Astrophysics Data System (ADS)

    Grigoryuk, E. N.; Bulkin, V. V.

    2017-07-01

    Automated control systems for technological processes are complex systems characterized by the presence of elements with a common overall purpose, the systemic nature of the algorithms implemented for the exchange and processing of information, and a large number of functional subsystems. The article gives examples of automatic control systems and automated process control systems, drawing parallels between them by identifying their strengths and weaknesses. A non-standard process control system is also proposed.

  7. Preparing new nurses with complexity science and problem-based learning.

    PubMed

    Hodges, Helen F

    2011-01-01

    Successful nurses function effectively with adaptability, improvability, and interconnectedness, and can see emerging and unpredictable complex problems. Preparing new nurses for complexity requires a significant change in prevalent but dated nursing education models for rising graduates. The science of complexity coupled with problem-based learning and peer review contributes a feasible framework for a constructivist learning environment to examine real-time systems data; explore uncertainty, inherent patterns, and ambiguity; and develop skills for unstructured problem solving. This article describes a pilot study of a problem-based learning strategy guided by principles of complexity science in a community clinical nursing course. Thirty-five senior nursing students participated during a 3-year period. Assessments included peer review, a final project paper, reflection, and a satisfaction survey. Results included higher-than-expected levels of student satisfaction, increased breadth of analysis of complex data, acknowledgment of communities as complex adaptive systems, and, overall, higher-level thinking skills than in previous years. 2011, SLACK Incorporated.

  8. Studying marine stratus with large eddy simulation

    NASA Technical Reports Server (NTRS)

    Moeng, Chin-Hoh

    1990-01-01

    Data sets from field experiments over the stratocumulus regime may include complications from larger scale variations, decoupled cloud layers, the diurnal cycle, or entrainment instability, etc. On top of the already complicated turbulence-radiation-condensation processes within the cloud-topped boundary layer (CTBL), these complexities may sometimes make interpretation of the data sets difficult. To study these processes, a better understanding is needed of the basic processes involved in the prototype CTBL. For example, is cloud top radiative cooling the primary source of the turbulent kinetic energy (TKE) within the CTBL? Historically, laboratory measurements have played an important role in addressing turbulence problems. The CTBL is a turbulent field which is probably impossible to generate in laboratories. Large eddy simulation (LES) is an alternative way of 'measuring' the turbulent structure under controlled environments, which allows the systematic examination of the basic physical processes involved. However, there are problems with the LES approach for the CTBL. The LES data need to be consistent with the observed data. The LES approach is discussed, and results are given which provide some insights into the simulated turbulent flow field. Problems with this approach for the CTBL and information from the FIRE experiment needed to justify the LES results are discussed.

  9. Some Thoughts Regarding Practical Quantum Computing

    NASA Astrophysics Data System (ADS)

    Ghoshal, Debabrata; Gomez, Richard; Lanzagorta, Marco; Uhlmann, Jeffrey

    2006-03-01

    Quantum computing has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing. The ability to perform parallel operations over an exponentially large computational space has proved to be the main advantage of the quantum computing model. In this regard, we are particularly interested in the potential applications of quantum computers to enhance real software systems of interest to the defense, industrial, scientific and financial communities. However, while much has been written in popular and scientific literature about the benefits of the quantum computational model, several of the problems associated with the practical implementation of real-life complex software systems in quantum computers are often ignored. In this presentation we will argue that practical quantum computation is not as straightforward as commonly advertised, even if the technological problems associated with the manufacturing and engineering of large-scale quantum registers were solved overnight. We will discuss some of the frequently overlooked difficulties that plague quantum computing in the areas of memories, I/O, addressing schemes, compilers, oracles, approximate information copying, logical debugging, error correction and fault-tolerant computing protocols.

  10. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.

  11. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
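
    In generic notation, the problems referred to are best uniform (Chebyshev) approximations of a function f on a compact set E in the complex plane,

        E_n(f) = \min_{p \in \mathcal{P}_n} \; \max_{z \in E} \, \big| f(z) - p(z) \big| ,

    where \mathcal{P}_n denotes polynomials of degree at most n; the "error curve" mentioned above is the image of E under f - p^* for the best approximation p^*. This is a hedged restatement in standard notation, not the paper's specific setting with sine-polynomial error curves.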

  12. Addressing Complex Challenges through Adaptive Leadership: A Promising Approach to Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Nelson, Tenneisha; Squires, Vicki

    2017-01-01

    Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…

  13. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    PubMed

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the large size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.
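
    For reference, the nonadditive Axilrod-Teller-Muto three-body term mentioned above has the standard triple-dipole form (generic notation; sign and damping conventions vary between dispersion schemes):

        E^{(3)}_{ABC} = C_9^{ABC}\, \frac{3 \cos\theta_a \cos\theta_b \cos\theta_c + 1}{\left( r_{AB}\, r_{BC}\, r_{CA} \right)^{3}} ,
        \qquad C_9^{ABC} \approx \sqrt{C_6^{AB} C_6^{BC} C_6^{CA}} ,

    where \theta_a, \theta_b, \theta_c are the internal angles of the triangle formed by atoms A, B and C, and r_{XY} are the interatomic distances.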

  14. Intelligent systems engineering methodology

    NASA Technical Reports Server (NTRS)

    Fouse, Scott

    1990-01-01

    An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.

  15. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    PubMed

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve the ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of the graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
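
    The abstract does not spell out the SSP update itself, so the following sketch only illustrates the ingredient it builds on: a PageRank-style power iteration whose edge transition weights are parameterized by edge features. The toy graph, feature vectors and exponential weighting are assumptions; in the actual method the parameters would be fitted against supervision jointly with the ranking scores.

```python
import numpy as np

# Hypothetical toy graph: edges with feature vectors (not from the paper's data).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 1)]
edge_feats = np.array([[1.0, 0.2], [0.5, 0.9], [0.3, 0.3], [0.8, 0.1], [0.2, 0.7]])
n_nodes = 4

def feature_weighted_pagerank(w, alpha=0.85, tol=1e-10, max_iter=200):
    """PageRank where each edge's transition weight is exp(w . features)."""
    W = np.zeros((n_nodes, n_nodes))
    for (u, v), f in zip(edges, edge_feats):
        W[u, v] = np.exp(w @ f)               # parameterized edge weight
    P = W / W.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    r = np.full(n_nodes, 1.0 / n_nodes)
    for _ in range(max_iter):
        r_new = alpha * r @ P + (1 - alpha) / n_nodes
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# In the full method the parameters w would be fit against supervision
# (e.g., known preference pairs of nodes); here we just evaluate one setting.
print(feature_weighted_pagerank(w=np.array([1.0, -0.5])))
```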

  16. Solving a real-world problem using an evolving heuristically driven schedule builder.

    PubMed

    Hart, E; Ross, P; Nelson, J

    1998-01-01

    This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems and solving each using a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome this. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. This results in a unique schedule builder being built for each daily scheduling problem, each individually tailored to deal with the particular features of that problem. This results in a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of a GA approach to several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.
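
    For orientation, the sketch below shows the traditional "permutation + schedule builder" baseline that the paper extends (not the evolved schedule builder itself) on an invented toy instance: a GA evolves job orders, and a fixed greedy builder assigns each job to the least-loaded crew.

```python
import random

random.seed(0)

# Hypothetical toy instance: processing times of jobs to be spread over 3 crews.
jobs = [7, 3, 9, 4, 6, 2, 8, 5]
n_crews = 3

def build_schedule(perm):
    """Greedy schedule builder: assign jobs in the evolved order to the
    least-loaded crew; fitness is the resulting makespan (lower is better)."""
    loads = [0] * n_crews
    for j in perm:
        k = loads.index(min(loads))
        loads[k] += jobs[j]
    return max(loads)

def crossover(p1, p2):
    """Order crossover (OX) producing a valid permutation."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(p, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(30)]
for gen in range(50):
    pop.sort(key=build_schedule)
    parents = pop[:10]                         # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
print("best makespan:", build_schedule(min(pop, key=build_schedule)))
```

    In the paper's approach, by contrast, the heuristic choices made by the builder would themselves be part of the evolved genome.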

  17. Organizational Influences on Interdisciplinary Interactions during Research and Design of Large-Scale Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.

    2012-01-01

    The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improved system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.

  18. Big Data Analysis of Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  19. Portrait of an Enzyme, a Complete Structural Analysis of a Multimodular beta-N-Acetylglucosaminidase from Clostridium perfringens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ficko-Blean, E.; Gregg, K; Adams, J

    2009-01-01

    Common features of the extracellular carbohydrate-active virulence factors involved in host-pathogen interactions are their large sizes and modular complexities. This has made them recalcitrant to structural analysis, and therefore our understanding of the significance of modularity in these important proteins is lagging. Clostridium perfringens is a prevalent human pathogen that harbors a wide array of large, extracellular carbohydrate-active enzymes and is an excellent and relevant model system to approach this problem. Here we describe the complete structure of C. perfringens GH84C (NagJ), a 1001-amino acid multimodular homolog of the C. perfringens μ-toxin, which was determined using a combination of small-angle x-ray scattering and x-ray crystallography. The resulting structure reveals unprecedented insight into how catalysis, carbohydrate-specific adherence, and the formation of molecular complexes with other enzymes via an ultra-tight protein-protein interaction are spatially coordinated in an enzyme involved in a host-pathogen interaction.

  20. Collaborative mining and interpretation of large-scale data for biomedical research insights.

    PubMed

    Tsiliki, Georgia; Karacapilidis, Nikos; Christodoulou, Spyros; Tzagarakis, Manolis

    2014-01-01

    Biomedical research becomes increasingly interdisciplinary and collaborative in nature. Researchers need to efficiently and effectively collaborate and make decisions by meaningfully assembling, mining and analyzing available large-scale volumes of complex multi-faceted data residing in different sources. In line with related research directives revealing that, in spite of the recent advances in data mining and computational analysis, humans can easily detect patterns which computer algorithms may have difficulty in finding, this paper reports on the practical use of an innovative web-based collaboration support platform in a biomedical research context. Arguing that dealing with data-intensive and cognitively complex settings is not a technical problem alone, the proposed platform adopts a hybrid approach that builds on the synergy between machine and human intelligence to facilitate the underlying sense-making and decision making processes. User experience shows that the platform enables more informed and quicker decisions, by displaying the aggregated information according to their needs, while also exploiting the associated human intelligence.

  1. Collaborative Mining and Interpretation of Large-Scale Data for Biomedical Research Insights

    PubMed Central

    Tsiliki, Georgia; Karacapilidis, Nikos; Christodoulou, Spyros; Tzagarakis, Manolis

    2014-01-01

    Biomedical research becomes increasingly interdisciplinary and collaborative in nature. Researchers need to efficiently and effectively collaborate and make decisions by meaningfully assembling, mining and analyzing available large-scale volumes of complex multi-faceted data residing in different sources. In line with related research directives revealing that, in spite of the recent advances in data mining and computational analysis, humans can easily detect patterns which computer algorithms may have difficulty in finding, this paper reports on the practical use of an innovative web-based collaboration support platform in a biomedical research context. Arguing that dealing with data-intensive and cognitively complex settings is not a technical problem alone, the proposed platform adopts a hybrid approach that builds on the synergy between machine and human intelligence to facilitate the underlying sense-making and decision making processes. User experience shows that the platform enables more informed and quicker decisions, by displaying the aggregated information according to their needs, while also exploiting the associated human intelligence. PMID:25268270

  2. A general path for large-scale solubilization of cellular proteins: From membrane receptors to multiprotein complexes

    PubMed Central

    Pullara, Filippo; Guerrero-Santoro, Jennifer; Calero, Monica; Zhang, Qiangmin; Peng, Ye; Spåhr, Henrik; Kornberg, Guy L.; Cusimano, Antonella; Stevenson, Hilary P.; Santamaria-Suarez, Hugo; Reynolds, Shelley L.; Brown, Ian S.; Monga, Satdarshan P.S.; Van Houten, Bennett; Rapić-Otrin, Vesna; Calero, Guillermo; Levine, Arthur S.

    2014-01-01

    Expression of recombinant proteins in bacterial or eukaryotic systems often results in aggregation rendering them unavailable for biochemical or structural studies. Protein aggregation is a costly problem for biomedical research. It forces research laboratories and the biomedical industry to search for alternative, more soluble, non-human proteins and limits the number of potential “druggable” targets. In this study we present a highly reproducible protocol that introduces the systematic use of an extensive number of detergents to solubilize aggregated proteins expressed in bacterial and eukaryotic systems. We validate the usefulness of this protocol by solubilizing traditionally difficult human protein targets to milligram quantities and confirm their biological activity. We use this method to solubilize monomeric or multimeric components of multi-protein complexes and demonstrate its efficacy to reconstitute large cellular machines. This protocol works equally well on cytosolic, nuclear and membrane proteins and can be easily adapted to a high throughput format. PMID:23137940

  3. Unfolding of a ClC chloride transporter retains memory of its evolutionary history.

    PubMed

    Min, Duyoung; Jefferson, Robert E; Qi, Yifei; Wang, Jing Yang; Arbing, Mark A; Im, Wonpil; Bowie, James U

    2018-05-01

    ClC chloride channels and transporters are important for chloride homeostasis in species from bacteria to human. Mutations in ClC proteins cause genetically inherited diseases, some of which are likely to involve folding defects. The ClC proteins present a challenging and unusual biological folding problem because they are large membrane proteins possessing a complex architecture, with many reentrant helices that go only partway through the membrane and loop back out. Here we were able to examine the unfolding of the Escherichia coli ClC transporter, ClC-ec1, using single-molecule forced unfolding methods. We found that the protein could be separated into two stable halves that unfolded independently. The independence of the two domains is consistent with an evolutionary model in which the two halves arose from independently folding subunits that later fused together. Maintaining smaller folding domains of lesser complexity within large membrane proteins may be an advantageous strategy to avoid misfolding traps.

  4. Socio-Technical Perspective on Interdisciplinary Interactions During the Development of Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.; Daly, Shanna; Baker, Wayne; Papalambros, panos; Seifert, Colleen

    2013-01-01

    This study investigates interdisciplinary interactions that take place during the research, development, and early conceptual design phases in the design of large-scale complex engineered systems (LaCES) such as aerospace vehicles. These interactions, which take place throughout a large engineering development organization, become the initial conditions of the systems engineering process that ultimately leads to the development of a viable system. This paper summarizes some of the challenges and opportunities regarding social and organizational issues that emerged from a qualitative study using ethnographic and survey data. The analysis reveals several socio-technical couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their benefits to the engineered system, as well as substantial challenges in interdisciplinary interactions. Noted benefits included enhanced knowledge and problem mitigation, and noted obstacles centered on organizational and human dynamics. Findings suggest that addressing the social challenges may be a critical need in enabling interdisciplinary interactions.

  5. Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach

    NASA Astrophysics Data System (ADS)

    Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.

    2017-10-01

    Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect ISs in an organisation. Therefore, investigating IS success is a complex problem. Also, because of the competitive business environment and the high volume of information flow in organisations, new issues like resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, with the optimal mathematical programming approach selected. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS by combined RE and CRM.
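
    A minimal sketch of one standard DEA formulation, the input-oriented CCR multiplier model solved as a linear program, on invented data; the paper's questionnaire-derived inputs, its model-selection step and the PCA validation are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows are decision-making units (e.g., IS components),
# columns are inputs (resources) and outputs (CRM / resilience scores).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])  # outputs

def ccr_efficiency(o):
    """Input-oriented CCR multiplier model for unit o:
    maximize u.Y_o  s.t.  v.X_o = 1,  u.Y_j - v.X_j <= 0 for all j,  u, v >= 0."""
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n_in)])        # minimize -u.Y_o
    A_ub = np.hstack([Y, -X])                          # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(n_out), X[o]])[None, :]
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```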

  6. The Retrospective Iterated Analysis Scheme for Nonlinear Chaotic Dynamics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2002-01-01

    Atmospheric data assimilation is the name scientists give to the techniques of blending atmospheric observations with atmospheric model results to obtain an accurate idea of what the atmosphere looks like at any given time. Because two pieces of information are used, observations and model results, the outcome of the data assimilation procedure should be better than what one would get by using either of these two pieces of information alone. There are a number of different mathematical techniques that fall under the data assimilation jargon. In theory, most of these techniques accomplish about the same thing. In practice, however, slight differences in the approaches amount to faster algorithms in some cases, more economical algorithms in other cases, and even better overall results in yet other cases because of practical uncertainties not accounted for by theory. Therefore, the key is to find the most adequate data assimilation procedure for the problem at hand. In our Data Assimilation group we have been doing extensive research to try to find just such a procedure. One promising possibility is what we call the retrospective iterated analysis (RIA) scheme. This procedure has recently been implemented and studied in the context of a very large data assimilation system built to help predict and study weather and climate. Although the results from that study suggest that the RIA scheme produces quite reasonable results, a complete evaluation of the scheme is very difficult due to the complexity of that problem. The present work steps back a little and studies the behavior of the RIA scheme in the context of a small problem. The problem is small enough to allow a full assessment of the quality of the RIA scheme, but it still has some of the complexity found in nature, namely, its chaotic-type behavior. We find that the RIA scheme performs very well for this small but still complex problem, a result that supports our earlier studies.

  7. Analytic Theory and Control of the Motion of Spinning Rigid Bodies

    NASA Technical Reports Server (NTRS)

    Tsiotras, Panagiotis

    1993-01-01

    Numerical simulations are often resorted to, in order to understand the attitude response and control characteristics of a rigid body. However, this approach in performing sensitivity and/or error analyses may be prohibitively expensive and time consuming, especially when a large number of problem parameters are involved. Thus, there is an important role for analytical models in obtaining an understanding of the complex dynamical behavior. In this dissertation, new analytic solutions are derived for the complete attitude motion of spinning rigid bodies, under minimal assumptions. Hence, we obtain the most general solutions reported in the literature so far. Specifically, large external torques and large asymmetries are included in the problem statement. Moreover, problems involving large angular excursions are treated in detail. A new tractable formulation of the kinematics is introduced which proves to be extremely helpful in the search for analytic solutions of the attitude history of such kinds of problems. The main utility of the new formulation becomes apparent however, when searching for feedback control laws for stabilization and/or reorientation of spinning spacecraft. This is an inherently nonlinear problem, where standard linear control techniques fail. We derive a class of control laws for spin axis stabilization of symmetric spacecraft using only two pairs of gas jet actuators. Practically, this could correspond to a spacecraft operating in failure mode, for example. Theoretically, it is also an important control problem which, because of its difficulty, has received little, if any, attention in the literature. The proposed control laws are especially simple and elegant. A feedback control law that achieves arbitrary reorientation of the spacecraft is also derived, using ideas from invariant manifold theory. The significance of this research is twofold. First, it provides a deeper understanding of the fundamental behavior of rigid bodies subject to body-fixed torques. Assessment of the analytic solutions reveals that they are very accurate; for symmetric bodies the solutions of Euler's equations of motion are, in fact, exact. Second, the results of this research have a fundamental impact on practical scientific and mechanical applications in terms of the analysis and control of all finite-sized rigid bodies ranging from nanomachines to very large bodies, both man made and natural. After all, Euler's equations of motion apply to all physical bodies, barring only the extreme limits of quantum mechanics and relativity.
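
    As a numerical counterpart to the analytic solutions discussed above, the sketch below simply integrates Euler's equations of motion for an assumed asymmetric body under a constant body-fixed torque; the inertia values and torque are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical principal moments of inertia and a constant body-fixed torque.
I = np.array([3.0, 2.0, 1.0])         # kg m^2
M = np.array([0.0, 0.0, 0.05])        # N m (small spin-axis torque)

def euler_equations(t, w):
    """Euler's equations: I1*w1' = (I2 - I3)*w2*w3 + M1, and cyclic."""
    w1, w2, w3 = w
    return [((I[1] - I[2]) * w2 * w3 + M[0]) / I[0],
            ((I[2] - I[0]) * w3 * w1 + M[1]) / I[1],
            ((I[0] - I[1]) * w1 * w2 + M[2]) / I[2]]

# Initial spin mostly about the third axis, with small transverse components.
sol = solve_ivp(euler_equations, (0.0, 60.0), [0.01, 0.02, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
print("angular velocity at t=60 s:", sol.y[:, -1])
```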

  8. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine providing alternate processing facilities, then the problem becomes a flexible flow shop problem (FFSP). The FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving the FFSP is rather tricky and time consuming. To address this limitation, the teaching-learning-based optimization (TLBO) and JAYA algorithms are chosen for this study because they are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms seem elegant, they lose solution diversity after a few iterations and get trapped in local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by the genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
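
    To make the parameter-free character of JAYA concrete, the sketch below applies its update rule to a generic continuous test function. Applying it to the FFSP would additionally require an encoding of schedules (e.g., random keys decoded into job permutations) plus the local search and mutation proposed in the paper, none of which are shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective standing in for the (much harder) makespan evaluation."""
    return np.sum(x ** 2)

dim, pop_size, iters = 5, 20, 200
pop = rng.uniform(-10, 10, size=(pop_size, dim))

for _ in range(iters):
    fit = np.apply_along_axis(sphere, 1, pop)
    best, worst = pop[fit.argmin()], pop[fit.argmax()]
    r1, r2 = rng.random((pop_size, dim)), rng.random((pop_size, dim))
    # JAYA move: drift toward the best solution and away from the worst.
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand_fit = np.apply_along_axis(sphere, 1, cand)
    improved = cand_fit < fit          # greedy acceptance, no tuning parameters
    pop[improved] = cand[improved]

print("best objective found:", np.apply_along_axis(sphere, 1, pop).min())
```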

  9. Digital Sequences and a Time Reversal-Based Impact Region Imaging and Localization Method

    PubMed Central

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Qian, Weifeng

    2013-01-01

    To reduce the time and cost of damage inspection, on-line impact monitoring of aircraft composite structures is needed. A digital monitor based on an array of piezoelectric transducers (PZTs) is developed to record the region of impacts on-line. It is small, lightweight and has low power consumption, but there are two problems with its impact alarm region localization method at the current stage. The first is that the accuracy rate of the impact alarm region localization is low, especially on complex composite structures. The second is that the area of the impact alarm region is large when a large-scale structure is monitored with a limited number of PZTs, which increases the time and cost of damage inspections. To solve these two problems, an impact alarm region imaging and localization method based on digital sequences and time reversal is proposed. In this method, the frequency band of the impact response signals is first estimated from the digital sequences. Then, characteristic signals of the impact response signals are constructed by sinusoidal modulation signals. Finally, the phase synthesis time reversal impact imaging method is adopted to obtain the impact region image. Based on the image, an error ellipse is generated to give the final impact alarm region. A validation experiment is implemented on a complex composite wing box of a real aircraft. The validation results show that the accuracy rate of impact alarm region localization is approximately 100%. The area of the impact alarm region can be reduced, and the number of PZTs needed to cover the same impact monitoring region is reduced by more than half. PMID:24084123

  10. Model Order Reduction Algorithm for Estimating the Absorption Spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.

    The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra are demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user-defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase in the number of eigenvalues within the energy window. Furthermore, we observe that the computational cost of the proposed algorithm scales only quadratically with respect to the problem dimension.
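
    A minimal sketch of the underlying idea, projection-based model order reduction for the shifted linear systems that define the spectrum, on a small random symmetric matrix: a handful of full solves builds a basis, and the whole frequency grid is then swept with the much smaller projected system. The adaptive, tolerance-driven interpolation of the actual algorithm is not reproduced; the matrix, broadening and sample shifts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small "propagator" and dipole vector standing in for the
# (much larger) electronic-structure operators.
n = 400
A = np.diag(np.linspace(0.5, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
b = rng.standard_normal(n)
gamma = 0.01                                  # spectral broadening

def response(omega, Aop, rhs):
    """Solve the shifted system (A - (omega + i*gamma) I) x = rhs and return
    a spectral intensity ~ Im(rhs^H x)."""
    x = np.linalg.solve(Aop - (omega + 1j * gamma) * np.eye(len(rhs)), rhs)
    return np.imag(np.vdot(rhs, x))

# Offline stage: full solves at a few sample shifts form the projection basis.
samples = np.linspace(1.0, 4.0, 8)
V = np.column_stack([np.linalg.solve(A - (w + 1j * gamma) * np.eye(n), b)
                     for w in samples])
Q, _ = np.linalg.qr(np.column_stack([V.real, V.imag]))   # real orthonormal basis

# Online stage: every point of the fine frequency grid only needs the reduced
# (Q^T A Q) system, whose dimension equals the number of basis vectors.
A_red, b_red = Q.T @ A @ Q, Q.T @ b
grid = np.linspace(1.0, 4.0, 200)
reduced_spectrum = np.array([response(w, A_red, b_red) for w in grid])
full_check = [response(w, A, b) for w in grid[::50]]      # spot-check a few points
print("max spot-check error:",
      max(abs(r - f) for r, f in zip(reduced_spectrum[::50], full_check)))
```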

  11. A Series of MATLAB Learning Modules to Enhance Numerical Competency in Applied Marine Sciences

    NASA Astrophysics Data System (ADS)

    Fischer, A. M.; Lucieer, V.; Burke, C.

    2016-12-01

    Enhanced numerical competency to navigate massive data landscapes is a critical skill students need to effectively explore, analyse and visualize complex patterns in high-dimensional data and to address the complexity of many of the world's problems. This is especially the case for interdisciplinary, undergraduate applied marine science programs, where students are required to demonstrate competency in methods and ideas across multiple disciplines. In response to this challenge, we have developed a series of repository-based data exploration, analysis and visualization modules in MATLAB for integration across various face-to-face and online classes within the University of Tasmania. The primary focus of these modules is to teach students to collect, aggregate and interpret data from large on-line marine scientific data repositories in order to 1) gain technical skills in discovering, accessing, managing and visualising large, numerous data sources, 2) interpret, analyse and design approaches to visualise these data, and 3) address, through numerical approaches, complex real-world problems that traditional scientific methods cannot address. All modules, implemented through a MATLAB live script, include a short recorded lecture to introduce the topic, a handout that gives an overview of the activities, an instructor's manual with a detailed methodology and discussion points, a student assessment (quiz and level-specific challenge task), and a survey. The marine science themes addressed through these modules include biodiversity, habitat mapping, algal blooms and sea surface temperature change, and utilize a series of marine science and oceanographic data portals. Through these modules, students with minimal experience in MATLAB or numerical methods are introduced to array indexing, concatenation, sorting and reshaping, principal component analysis, spectral analysis and unsupervised classification within the context of oceanographic processes, marine geology and marine community ecology.

  12. Learning to see, seeing to learn: visual aspects of sensemaking

    NASA Astrophysics Data System (ADS)

    Russell, Daniel M.

    2003-06-01

    When one says "I see," what is usually meant is "I understand." But what does it mean to create a sense of understanding of a large, complex problem, one with many interlocking pieces, sometimes ill-fitting data and the occasional bit of contradictory information? The traditional computer science perspective on helping people towards understanding is to provide an armamentarium of tools and techniques - databases, query tools and a variety of graphing methods. As a field, we have an overly simple perspective on what it means to grapple with real information. In practice, people who try to make sense of something (say, the life sciences, the Middle East, the large-scale structure of the universe, their taxes) are faced with a complex collection of information, some in easy-to-digest structured forms, but with many relevant parts scattered hither and yon, in forms and shapes too difficult to manage. To create an understanding, we find that people create representations of complex information. Yet using representations relies on fairly sophisticated perceptual practices. These practices are in no way preordained, but subject to the kinds of perceptual and cognitive phenomena we see in everyday life. In order to understand our information environments, we need to learn to perceive these perceptual elements, and understand when they do, and do not, work to our advantage. A more powerful approach to the problem of supporting realistic sensemaking practice is to design information environments that accommodate both the world's information realities and people's cognitive characteristics. This paper argues that visual aspects of representation use often dominate sensemaking behavior, and illustrates this by showing three sensemaking tools we have built that take advantage of this property.

  13. Radio Galaxy Zoo: Machine learning for radio source host galaxy cross-identification

    NASA Astrophysics Data System (ADS)

    Alger, M. J.; Banfield, J. K.; Ong, C. S.; Rudnick, L.; Wong, O. I.; Wolf, C.; Andernach, H.; Norris, R. P.; Shabala, S. S.

    2018-05-01

    We consider the problem of determining the host galaxies of radio sources by cross-identification. This has traditionally been done manually, which will be intractable for wide-area radio surveys like the Evolutionary Map of the Universe (EMU). Automated cross-identification will be critical for these future surveys, and machine learning may provide the tools to develop such methods. We apply a standard approach from computer vision to cross-identification, introducing one possible way of automating this problem, and explore the pros and cons of this approach. We apply our method to the 1.4 GHz Australian Telescope Large Area Survey (ATLAS) observations of the Chandra Deep Field South (CDFS) and the ESO Large Area ISO Survey South 1 (ELAIS-S1) fields by cross-identifying them with the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. We train our method with two sets of data: expert cross-identifications of CDFS from the initial ATLAS data release and crowdsourced cross-identifications of CDFS from Radio Galaxy Zoo. We found that a simple strategy of cross-identifying a radio component with the nearest galaxy performs comparably to our more complex methods, though our estimated best-case performance is near 100 per cent. ATLAS contains 87 complex radio sources that have been cross-identified by experts, so there are not enough complex examples to learn how to cross-identify them accurately. Much larger datasets are therefore required for training methods like ours. We also show that training our method on Radio Galaxy Zoo cross-identifications gives comparable results to training on expert cross-identifications, demonstrating the value of crowdsourced training data.
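
    The nearest-galaxy strategy mentioned above reduces to a nearest-neighbour query. The sketch below shows it with a KD-tree on invented flat-sky coordinates and an assumed 30-arcsecond acceptance radius; a real pipeline would match in spherical coordinates (e.g., with astropy's catalogue-matching utilities).

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)

# Hypothetical flat-sky positions in degrees (a real pipeline would use
# spherical coordinates and proper astrometric matching).
radio_components = rng.uniform(0.0, 1.0, size=(50, 2))     # e.g. ATLAS components
ir_galaxies = rng.uniform(0.0, 1.0, size=(2000, 2))        # e.g. SWIRE candidates

tree = cKDTree(ir_galaxies)
dist, idx = tree.query(radio_components, k=1)               # nearest galaxy per component

max_sep_deg = 30.0 / 3600.0                                  # assumed 30 arcsec radius
matched = dist <= max_sep_deg
for comp, (d, i, ok) in enumerate(zip(dist, idx, matched)):
    if ok:
        print(f"radio component {comp} -> galaxy {i} ({d * 3600:.1f} arcsec)")

print(f"{matched.sum()} of {len(radio_components)} components cross-identified")
```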

  14. The Bright Side of Being Blue: Depression as an Adaptation for Analyzing Complex Problems

    ERIC Educational Resources Information Center

    Andrews, Paul W.; Thomson, J. Anderson, Jr.

    2009-01-01

    Depression is the primary emotional condition for which help is sought. Depressed people often report persistent rumination, which involves analysis, and complex social problems in their lives. Analysis is often a useful approach for solving complex problems, but it requires slow, sustained processing, so disruption would interfere with problem…

  15. Meshless Method for Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases in which experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to carry out these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques, and mesh generation is an essential preprocessing step that discretizes the computation domain for them. However, when dealing with complex geometries, these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate problems in an easier manner, even for complex cases. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow problems. To address this discontinuity problem, this research deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common types of collocating meshless method, the RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, so some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to the one used in the finite volume method is added to capture steep gradients or shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework, and its application is further illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible problem are compared with the exact solution for the flow over a ramp and with solutions from a finite volume discretization and the discontinuous Galerkin method, both of which require a mesh. The applicability and robustness of the algorithm for complex problems are thus demonstrated.
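
    The spatial discretization named in the abstract, RBF-DQ, expresses a derivative at a node as a weighted sum of function values at scattered neighbouring nodes, with the weights fixed by requiring exactness for radial basis functions centred at those nodes. The one-dimensional sketch below illustrates just that step with multiquadric RBFs and an assumed shape parameter; the upwind flux treatment and blended RBF of the full scheme are not shown.

```python
import numpy as np

def rbf_dq_weights(nodes, xi, c=0.5):
    """Derivative weights w such that f'(xi) ~ sum_j w_j f(nodes_j).
    The weights are fixed by requiring exactness for every multiquadric
    basis function phi_k(x) = sqrt((x - x_k)^2 + c^2) centred at the nodes."""
    B = np.sqrt((nodes[None, :] - nodes[:, None]) ** 2 + c ** 2)   # B[k, j] = phi_k(x_j)
    d = (xi - nodes) / np.sqrt((xi - nodes) ** 2 + c ** 2)         # d[k] = phi_k'(xi)
    return np.linalg.solve(B, d)

# Scattered (mesh-free) nodes in a local support around the evaluation point.
nodes = np.array([-0.31, -0.17, 0.0, 0.12, 0.28, 0.45])
xi = 0.0
w = rbf_dq_weights(nodes, xi)

f = np.sin(nodes)
print("RBF-DQ estimate of f'(0):", w @ f)     # ~ cos(0) = 1
print("exact:                    ", np.cos(xi))
```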

  16. Towards the Rational Design of MRI Contrast Agents: Electron Spin Relaxation Is Largely Unaffected by the Coordination Geometry of Gadolinium(III)–DOTA-Type Complexes

    PubMed Central

    Bean, Jonathan F.; Clarkson, Robert B.; Helm, Lothar; Moriggi, Loïck; Sherry, A. Dean

    2009-01-01

    Electron-spin relaxation is one of the determining factors in the efficacy of MRI contrast agents. Of all the parameters involved in determining relaxivity it remains the least well understood, particularly as it relates to the structure of the complex. One of the reasons for the poor understanding of electron-spin relaxation is that it is closely related to the ligand-field parameters of the Gd3+ ion that forms the basis of MRI contrast agents and these complexes generally exhibit a structural isomerism that inherently complicates the study of electron spin relaxation. We have recently shown that two DOTA-type ligands could be synthesised that, when coordinated to Gd3+, would adopt well defined coordination geometries and are not subject to the problems of intramolecular motion of other complexes. The EPR properties of these two chelates were studied and the results examined with theory to probe their electron-spin relaxation properties. PMID:18283704

  17. Size does Matter

    NASA Astrophysics Data System (ADS)

    Vespignani, Alessandro

    From schools of fish and flocks of birds to digital networks and self-organizing biopolymers, our understanding of spontaneously emergent phenomena, self-organization, and critical behavior is in large part due to complex systems science. The complex systems approach is indeed a very powerful conceptual framework for shedding light on the link between the microscopic dynamical evolution of the basic elements of a system and the emergence of macroscopic phenomena, often providing evidence for mathematical principles that go beyond the particulars of the individual system and thus hinting at general modeling principles. By killing the myth of the ant queen and shifting the focus to the dynamical interactions across the elements of the system, complex systems science has ushered in the conceptual understanding of many phenomena at the core of major scientific and social challenges, such as the emergence of consensus, social opinion dynamics, conflict and cooperation, and contagion phenomena. For many years, though, these complex systems approaches to real-world problems often suffered from being oversimplified and not grounded in actual data...

  18. Weighted SGD for ℓ p Regression with Randomized Preconditioning.

    PubMed

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W

    2016-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in O(log n · nnz(A) + poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints, which in general is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and in the solution vector measured in prediction norm in O(log n · nnz(A) + poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n and low dimension d satisfy d ≥ 1/ε and n ≥ d²/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly.
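
    A simplified sketch of the pipeline described above for unconstrained ℓ2 regression: sketch the matrix, take R from a QR factorization of the sketch to precondition, sample rows by their preconditioned norms, and iterate stochastically. The per-row step size used here turns each update into a weighted Kaczmarz-type projection rather than the paper's exact pwSGD iteration, and the data, sketch size and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, badly scaled overdetermined least-squares problem min_x ||A x - b||_2.
n, d = 5000, 20
A = rng.standard_normal((n, d)) * rng.uniform(0.1, 10.0, size=(1, d))
b = A @ rng.standard_normal(d) + 1e-3 * rng.standard_normal(n)

# RLA step: sketch A, take R from a QR of the sketch, and precondition.
s = 10 * d
SA = (rng.standard_normal((s, n)) / np.sqrt(s)) @ A      # Gaussian sketch of A
R = np.linalg.qr(SA, mode='r')
U = A @ np.linalg.inv(R)            # well-conditioned system U y ~ b, x = R^{-1} y

# Importance sampling distribution from preconditioned row norms
# (these approximate the leverage scores of A).
row_norm2 = np.einsum('ij,ij->i', U, U)
p = row_norm2 / row_norm2.sum()

# Weighted SGD-style iteration on the preconditioned problem.
y = np.zeros(d)
for _ in range(30000):
    i = rng.choice(n, p=p)
    y += (b[i] - U[i] @ y) / row_norm2[i] * U[i]          # per-row projection step
x_hat = np.linalg.solve(R, y)                             # map back to original variables

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative error vs direct least squares:",
      np.linalg.norm(x_hat - x_ls) / np.linalg.norm(x_ls))
```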

  19. Large-eddy simulation of a backward facing step flow using a least-squares spectral element method

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Mittal, Rajat

    1996-01-01

    We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. Smagorinsky model with Van Driest near wall damping is used for sub-grid scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method on numerical parameters before it is applied to complex engineering problems.
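
    For reference, the sub-grid model named above reduces to an eddy viscosity ν_t = (C_s Δ)² |S|, with the length scale damped near walls. The sketch below evaluates it on a synthetic 2D velocity field; the Smagorinsky constant, the wall-unit scaling and the Van Driest constant A⁺ = 25 are assumed values.

```python
import numpy as np

# Synthetic 2D velocity field on a uniform grid (stand-in for resolved LES data).
N, L = 64, 1.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

# Resolved strain-rate tensor from central differences.
dudx, dudy = np.gradient(u, dx, dx)
dvdx, dvdy = np.gradient(v, dx, dx)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))   # |S| = sqrt(2 Sij Sij)

# Smagorinsky eddy viscosity with Van Driest damping of the length scale.
Cs, A_plus = 0.17, 25.0
y_plus = Y * 500.0                         # assumed wall-unit scaling, wall at Y = 0
damping = 1.0 - np.exp(-y_plus / A_plus)
nu_t = (Cs * dx * damping) ** 2 * S_mag

print("max subgrid viscosity:", nu_t.max())
```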

  20. Saving SALT: repairs to the spherical aberration corrector of the Southern African Large Telescope (SALT)

    NASA Astrophysics Data System (ADS)

    O'Donoghue, Darragh E.; O'Connor, James; Crause, Lisa A.; Strumpfer, Francois; Strydom, Ockert J.; Brink, Janus D.; Sass, Craig; Wiid, Eben; Atad-Ettedgui, Eli

    2010-07-01

    The construction of the Southern African Large Telescope (SALT) was largely completed by the end of 2005. At the beginning of 2006, it was realized that the telescope's image quality suffered from optical aberrations, chiefly a focus gradient across the focal plane, accompanied by astigmatism and higher-order aberrations. In the previous conference in this series, a paper was presented describing the optical system engineering investigation that had been conducted to diagnose the problem. This investigation exonerated the primary mirror and the science instruments as the cause and isolated the problem to the interface between the telescope and a major optical sub-system, the spherical aberration corrector (SAC). The SAC is a complex sub-system of four aspheric mirrors which corrects the spherical aberration of the 11-m primary mirror. In the last two years, a solution to this problem was developed which involved removing the SAC from the telescope, installing a modification of the SAC/telescope interface, re-aligning and testing the four SAC mirrors, and re-installing the SAC on the telescope. This paper describes the plan, discusses the details and shows progress to date and the current status.
