The Bright Side of Being Blue: Depression as an Adaptation for Analyzing Complex Problems
ERIC Educational Resources Information Center
Andrews, Paul W.; Thomson, J. Anderson, Jr.
2009-01-01
Depression is the primary emotional condition for which help is sought. Depressed people often report persistent rumination, which involves analysis, and complex social problems in their lives. Analysis is often a useful approach for solving complex problems, but it requires slow, sustained processing, so disruption would interfere with problem…
Implementation of Complexity Analyzing Based on Additional Effect
NASA Astrophysics Data System (ADS)
Zhang, Peng; Li, Na; Liang, Yanhong; Liu, Fang
According to Complexity Theory, complexity is present in a system when a functional requirement is not satisfied. Several studies have applied Complexity Theory within Axiomatic Design. However, these studies focus on reducing complexity, and none addresses a method for analyzing the complexity in a system. Therefore, this paper puts forth a method for analyzing complexity that is intended to fill this gap. To develop the method of analyzing complexity based on additional effect, the paper introduces two concepts: ideal effect and additional effect. The method combines Complexity Theory with the Theory of Inventive Problem Solving (TRIZ) and helps designers analyze complexity by means of additional effects. A case study shows the application of the process.
Technical Development and Application of Soft Computing in Agricultural and Biological Engineering
USDA-ARS's Scientific Manuscript database
Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...
Development of Soft Computing and Applications in Agricultural and Biological Engineering
USDA-ARS's Scientific Manuscript database
Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...
Decision Analysis for Environmental Problems
Environmental management problems are often complex and uncertain. A formal process with proper guidance is needed to understand the issues, identify sources of disagreement, and analyze the major uncertainties in environmental problems. This course will present a process that fo...
Problem-solving tools for analyzing system problems. The affinity map and the relationship diagram.
Lepley, C J
1998-12-01
The author describes how to use two management tools, an affinity map and a relationship diagram, to define and analyze aspects of a complex problem in a system. The affinity map identifies the key influencing elements of the problem, whereas the relationship diagram helps to identify the area that is the most important element of the issue. Managers can use the tools to draw a map of problem drivers, graphically display the drivers in a diagram, and use the diagram to develop a cause-and-effect relationship.
Application of NASA management approach to solve complex problems on earth
NASA Technical Reports Server (NTRS)
Potate, J. S.
1972-01-01
The application of NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussions.
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno
2013-01-01
Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.
1984-01-01
A numerical procedure is presented for analyzing a wide variety of heat conduction problems in multilayered bodies having complex geometry. The method is based on a finite difference solution of the heat conduction equation using a body fitted coordinate system transformation. Solution techniques are described for steady and transient problems with and without internal energy generation. Results are found to compare favorably with several well known solutions.
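To make the finite-difference idea concrete, the sketch below applies an explicit update of the transient heat-conduction equation on a plain Cartesian grid. It is a minimal illustration only: the report's body-fitted coordinate transformation for complex geometries is not reproduced, and the grid size, diffusivity, and boundary values are arbitrary assumptions.

```python
import numpy as np

def step_heat_2d(T, alpha, dx, dt):
    """One explicit finite-difference step of dT/dt = alpha * laplacian(T)
    on a uniform Cartesian grid with fixed (Dirichlet) boundary values."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T_new = T + alpha * dt * lap
    # Keep the boundary values fixed (Dirichlet conditions).
    T_new[0, :], T_new[-1, :], T_new[:, 0], T_new[:, -1] = T[0, :], T[-1, :], T[:, 0], T[:, -1]
    return T_new

# Illustrative transient run: hot left wall, cold everywhere else (assumed values).
alpha, dx = 1e-5, 0.01                 # thermal diffusivity, grid spacing
dt = 0.2 * dx**2 / alpha               # respects the explicit stability limit (<= 0.25)
T = np.zeros((50, 50))
T[:, 0] = 100.0                        # Dirichlet boundary condition on the left wall
for _ in range(500):
    T = step_heat_2d(T, alpha, dx, dt)
print("interior temperature next to the hot wall:", round(T[25, 1], 2))
```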
How Students Circumvent Problem-Solving Strategies that Require Greater Cognitive Complexity.
ERIC Educational Resources Information Center
Niaz, Mansoor
1996-01-01
Analyzes the great diversity in problem-solving strategies used by students in solving a chemistry problem and discusses the relationship between these variables and different cognitive variables. Concludes that students try to circumvent certain problem-solving strategies by adapting flexible and stylistic innovations that render the cognitive…
Structural qualia: a solution to the hard problem of consciousness.
Loorits, Kristjan
2014-01-01
The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which even the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved.
Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.
ERIC Educational Resources Information Center
Poirier, Louise
Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…
Solving the Problem of Linear Viscoelasticity for Piecewise-Homogeneous Anisotropic Plates
NASA Astrophysics Data System (ADS)
Kaloerov, S. A.; Koshkin, A. A.
2017-11-01
An approximate method for solving the problem of linear viscoelasticity for thin anisotropic plates subject to transverse bending is proposed. The small-parameter method is used to reduce the problem to a sequence of boundary-value problems of the applied theory of plate bending, which are solved using complex potentials. The general form of the complex potentials in each approximation and the boundary conditions for determining them are obtained. Problems for a plate with elliptic elastic inclusions are solved as examples. The numerical results for a plate with one or two elliptical (circular) inclusions and with linear inclusions are analyzed.
The principle of superposition and its application in ground-water hydraulics
Reilly, T.E.; Franke, O.L.; Bennett, G.D.
1984-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important application in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
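As a concrete illustration of superposition in ground-water hydraulics, the sketch below adds the Theis drawdowns produced by two independently pumping wells at a common observation point. It is only a hedged example: the transmissivity, storativity, pumping rates, and distances are assumed values, not data from the report.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1, i.e. the Theis well function W(u)

def theis_drawdown(r, t, Q, T, S):
    """Drawdown at distance r after time t due to a single well pumping at rate Q."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Because the governing equation is linear, drawdowns from individual wells simply add
# (principle of superposition). All parameter values below are illustrative assumptions.
T, S = 500.0, 1e-4            # transmissivity (m^2/d), storativity
t = 10.0                      # days of pumping
r1, r2 = 120.0, 300.0         # distances from the observation point to each well (m)
s_total = theis_drawdown(r1, t, 800.0, T, S) + theis_drawdown(r2, t, 400.0, T, S)
print(f"combined drawdown at the observation point: {s_total:.2f} m")
```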
ERIC Educational Resources Information Center
Podschuweit, Sören; Bernholt, Sascha; Brückmann, Maja
2016-01-01
Background: Complexity models have provided a suitable framework in various domains to assess students' educational achievement. Complexity is often used as the analytical focus when regarding learning outcomes, i.e. when analyzing written tests or problem-centered interviews. Numerous studies reveal negative correlations between the complexity of…
Problem Solving and Comprehension. Third Edition.
ERIC Educational Resources Information Center
Whimbey, Arthur; Lochhead, Jack
This book is directed toward increasing students' ability to analyze problems and comprehend what they read and hear. It outlines and illustrates the methods that good problem solvers use in attacking complex ideas, and provides practice in applying these methods to a variety of questions involving comprehension and reasoning. Chapter I includes a…
Understanding the determinants of problem-solving behavior in a complex environment
NASA Technical Reports Server (NTRS)
Casner, Stephen A.
1994-01-01
It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how the complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates of the probable effects of each environmental feature.
ERIC Educational Resources Information Center
Hsu, Hui-Yu; Silver, Edward A.
2014-01-01
We examined geometric calculation with number tasks used within a unit of geometry instruction in a Taiwanese classroom, identifying the source of each task used in classroom instruction and analyzing the cognitive complexity of each task with respect to 2 distinct features: diagram complexity and problem-solving complexity. We found that…
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1982-01-01
The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of solution (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.
Problem Solving & Comprehension. Fourth Edition.
ERIC Educational Resources Information Center
Whimbey, Arthur; Lochhead, Jack
This book shows how to increase one's power to analyze and comprehend problems. First, it outlines and illustrates the methods that good problem solvers use in attacking complex ideas. Then it gives some practice in applying these methods to a variety of questions in comprehension and reasoning. Chapters include: (1) "Test Your Mind--See How…
ERIC Educational Resources Information Center
Koberg, Don; Bagnall, Jim
This publication provides an organizational scheme for a creative problem solving process. The authors indicate that all problems can benefit from the same logical and orderly process now employed to solve many complex problems. The principles remain constant; only specific methods change. Chapter 1 analyzes the development of creativity and fear…
Interdisciplinary Analysis and Global Policy Studies.
ERIC Educational Resources Information Center
Meeks, Philip
This paper examines ways in which interdisciplinary and multidisciplinary analysis of global policy studies can increase understanding of complex global problems. Until recently, social science has been the discipline most often turned to for techniques and methodology to analyze social problems and behaviors. However, because social science…
Assessing Design Activity in Complex CMOS Circuit Design.
ERIC Educational Resources Information Center
Biswas, Gautam; And Others
This report characterizes human problem solving in digital circuit design. Protocols of 11 different designers with varying degrees of training were analyzed by identifying the designers' problem solving strategies and discussing activity patterns that differentiate the designers. These methods are proposed as a tentative basis for assessing…
The Quiet Revolution in Land Use Control.
ERIC Educational Resources Information Center
Bosselman, Fred; Callies, David
The Council on Environmental Quality commissioned this report on the innovative land use laws of several states to learn how some of the most complex land use issues and problems of re-allocating responsibilities between state and local governments are being addressed. Many of the laws analyzed are designed to deal with problems that are treated…
ERIC Educational Resources Information Center
Buxton, Cory A.; Salinas, Alejandra; Mahotiere, Margarette; Lee, Okhee; Secada, Walter G.
2013-01-01
Grounded in teacher professional development addressing the intersection of student diversity and content area instruction, this study examined school teachers' pedagogical reasoning complexity as they reflected on their second language learners' science problem solving abilities using both home and school contexts. Teachers responded to interview…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William; Laird, Carl; Siirola, John
Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
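A minimal sketch of the algebraic modeling style Pyomo supports is shown below; the tiny linear program and the choice of the 'glpk' solver are illustrative assumptions, not part of the cited work.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

# A deliberately tiny production-planning LP, just to show the algebraic specification style.
m = ConcreteModel()
m.x = Var(domain=NonNegativeReals)            # units of product A
m.y = Var(domain=NonNegativeReals)            # units of product B
m.profit = Objective(expr=3 * m.x + 5 * m.y, sense=maximize)
m.labor = Constraint(expr=2 * m.x + 4 * m.y <= 40)
m.material = Constraint(expr=3 * m.x + 1 * m.y <= 30)

# Any solver Pyomo can find would work; 'glpk' is only an example choice.
SolverFactory("glpk").solve(m)
print("x =", m.x.value, " y =", m.y.value)
```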
Modeling Security Bridge Certificate Authority Architecture
NASA Astrophysics Data System (ADS)
Ren, Yizhi; Li, Mingchu; Sakurai, Kouichi
Current Public Key Infrastructures suffer from a scaling problem, and some may have security problems, even given the topological simplification of bridge certification authorities. This paper analyzes the security problems in the Bridge Certificate Authority (BCA) model using the concept of "impersonation risk," and proposes a new modified BCA model, which enhances security but is somewhat more complex in certification path building and implementation than the existing one.
Johnston, Lee M; Matteson, Carrie L; Finegood, Diane T
2014-07-01
We demonstrate the use of a systems-based framework to assess solutions to complex health problems such as obesity. We coded 12 documents published between 2004 and 2013 aimed at influencing obesity planning for complex systems design (9 reports from US and Canadian governmental or health authorities, 1 Cochrane review, and 2 Institute of Medicine reports). We sorted data using the intervention-level framework (ILF), a novel solutions-oriented approach to complex problems. An in-depth comparison of 3 documents provides further insight into complexity and systems design in obesity policy. The majority of strategies focused mainly on changing the determinants of energy imbalance (food intake and physical activity). ILF analysis brings to the surface actions aimed at higher levels of system function and points to a need for more innovative policy design. Although many policymakers acknowledge obesity as a complex problem, many strategies stem from the paradigm of individual choice and are limited in scope. The ILF provides a template to encourage natural systems thinking and more strategic policy design grounded in complexity science.
Mathematical Models to Determine Stable Behavior of Complex Systems
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.
2018-05-01
The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality, and bifurcation. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and by taking strange attractors into account.
Fast angular synchronization for phase retrieval via incomplete information
NASA Astrophysics Data System (ADS)
Viswanathan, Aditya; Iwen, Mark
2015-08-01
We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^*/|x_j x_k^*|, j,k ∈ {1,...,d}, and where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem - where we recover an unknown complex vector from phaseless (or magnitude) measurements.
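The basic spectral route to angular synchronization can be sketched as follows: form the (incomplete) Hermitian matrix of phase-difference measurements, degree-normalize it, and read the phases off its leading eigenvector. This is a dense, illustrative sketch with synthetic noiseless data and a random sampling mask; it is not the paper's linear-time algorithm or its structured measurement design.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
true_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, d))      # unknown unit-modulus phases

# Noiseless phase-difference measurements x_j x_k^* on a random symmetric subset of pairs.
mask = rng.random((d, d)) < 0.3
mask = mask | mask.T
np.fill_diagonal(mask, True)
M = np.where(mask, np.outer(true_phase, true_phase.conj()), 0.0)

# Degree-normalize and take the leading eigenvector of the Hermitian matrix; for a
# connected sampling pattern this recovers the phases up to one global rotation.
deg = mask.sum(axis=1).astype(float)
M_norm = M / np.sqrt(np.outer(deg, deg))
eigvals, eigvecs = np.linalg.eigh(M_norm)
est = eigvecs[:, -1]
est = est / np.abs(est)                                      # project entries back to unit modulus

# Remove the global phase ambiguity before comparing with the ground truth.
g = np.vdot(est, true_phase)
g = g / abs(g)
print("max entrywise error:", np.max(np.abs(est * g - true_phase)))
```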
Xu, Wei
2007-12-01
This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.
ERIC Educational Resources Information Center
Kopp, Birgitta; Hasenbein, Melanie; Mandl, Heinz
2014-01-01
This article analyzes the collaborative problem solving activities and learning outcomes of five groups that worked on two different complex cases in a virtual professional training course. In this asynchronous virtual learning environment, all knowledge management content was delivered virtually and collaboration took place through forums. To…
ERIC Educational Resources Information Center
Hill, George B.; Sweeney, Joseph B.
2015-01-01
Reaction workup can be a complex problem for those facing novel synthesis of difficult compounds for the first time. Systematic thinking about workup problem solving should be inculcated by the time students reach the mid-graduate level. A structured approach is proposed, building decision tree flowcharts to analyze challenges, and an exemplar flowchart is presented…
Separating the Problem and the Person: Insights from Narrative Therapy with People Who Stutter
ERIC Educational Resources Information Center
Ryan, Fiona; O'Dwyer, Mary; Leahy, Margaret M.
2015-01-01
Stuttering is a complex disorder of speech that encompasses motor speech and emotional and cognitive factors. The use of narrative therapy is described here, focusing on the stories that clients tell about the problems associated with stuttering that they have encountered in their lives. Narrative therapy uses these stories to understand, analyze,…
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, n-queen problems, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs along n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance when n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
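For readers unfamiliar with the n-queens benchmark mentioned above, the sketch below solves it with the classic min-conflicts local search. It illustrates the kind of conflict-driven repair used on such CSPs; it is not an implementation of MAEA-CSPs or of its minimum conflict encoding, and the board size and step budget are arbitrary choices.

```python
import random

def conflicts(col, row, rows):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n, max_steps=100_000, seed=0):
    """Classic min-conflicts local search for the n-queens CSP
    (one queen per column; the variable for a column is its row)."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c], rows) > 0]
        if not conflicted:
            return rows                              # no queen is attacked: a solution
        col = rng.choice(conflicted)
        # Move the chosen queen to a row with the fewest conflicts (ties broken at random).
        scores = [conflicts(col, r, rows) for r in range(n)]
        best = min(scores)
        rows[col] = rng.choice([r for r, s in enumerate(scores) if s == best])
    return None

solution = min_conflicts_queens(64)
print("solved" if solution is not None else "no solution within the step budget")
```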
Computation and visualization of geometric partial differential equations
NASA Astrophysics Data System (ADS)
Tiee, Christopher L.
The chief goal of this work is to explore a modern framework for the study and approximation of partial differential equations, recast common partial differential equations into this framework, and prove theorems about such equations and their approximations. A central motivation is to recognize and respect the essential geometric nature of such problems, and take it into consideration when approximating. The hope is that this process will lead to the discovery of more refined algorithms and processes and apply them to new problems. In the first part, we introduce our quantities of interest and reformulate traditional boundary value problems in the modern framework. We see how Hilbert complexes capture and abstract the most important properties of such boundary value problems, leading to generalizations of important classical results such as the Hodge decomposition theorem. They also provide the proper setting for numerical approximations. We also provide an abstract framework for evolution problems in these spaces: Bochner spaces. We next turn to approximation. We build layers of abstraction, progressing from functions, to differential forms, and finally, to Hilbert complexes. We explore finite element exterior calculus (FEEC), which allows us to approximate solutions involving differential forms, and analyze the approximation error. In the second part, we prove our central results. We first prove an extension of current error estimates for the elliptic problem in Hilbert complexes. This extension handles solutions with nonzero harmonic part. Next, we consider evolution problems in Hilbert complexes and prove abstract error estimates. We apply these estimates to the problem for Riemannian hypersurfaces in R^{n+1}, generalizing current results for open subsets of R^n. Finally, we apply some of the concepts to a nonlinear problem, the Ricci flow on surfaces, and use tools from nonlinear analysis to help develop and analyze the equations. In the appendices, we detail some additional motivation and a source for further examples: canonical geometries that are realized as steady-state solutions to parabolic equations similar to that of Ricci flow.
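For reference, the classical result that the thesis generalizes can be stated as follows (the standard Hodge decomposition of k-forms on a compact oriented Riemannian manifold; the Hilbert-complex version in the thesis is more general):

```latex
% Classical Hodge decomposition on a compact oriented Riemannian manifold M
% (the thesis works with a generalization of this result to Hilbert complexes):
\Omega^k(M) \;=\; \mathrm{d}\,\Omega^{k-1}(M) \;\oplus\; \delta\,\Omega^{k+1}(M) \;\oplus\; \mathcal{H}^k(M),
\qquad
\mathcal{H}^k(M) \;=\; \{\,\omega \in \Omega^k(M) : \mathrm{d}\omega = 0,\ \delta\omega = 0\,\}.
```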
Complex Langevin method: When can it be trusted?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aarts, Gert; Seiler, Erhard; Stamatescu, Ion-Olimpiu
2010-03-01
We analyze to what extent the complex Langevin method, which is in principle capable of solving the so-called sign problem, can be considered reliable. We give a formal derivation of the correctness and then point out various mathematical loopholes. The detailed study of some simple examples leads to practical suggestions about the application of the method.
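A common toy illustration of the complex Langevin idea (not the gauge-theory setting studied in the paper) is a single complex Gaussian "action" S(z) = σz²/2 with complex σ, for which the exact result ⟨z²⟩ = 1/σ is known. The step size, trajectory length, and value of σ below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0 + 1.0j                 # complex "coupling"; exact result is <z^2> = 1/sigma
dt, n_steps, n_therm = 0.01, 200_000, 5_000

# Complex Langevin: complexify the variable, drift = -dS/dz = -sigma*z, real Gaussian noise.
z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    z = z - sigma * z * dt + np.sqrt(2 * dt) * rng.normal()
    if step >= n_therm:
        samples.append(z * z)

print("complex Langevin estimate of <z^2>:", np.mean(samples))
print("exact value 1/sigma:", 1 / sigma)
```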
Using Networks to Visualize and Analyze Process Data for Educational Assessment
ERIC Educational Resources Information Center
Zhu, Mengxiao; Shu, Zhan; von Davier, Alina A.
2016-01-01
New technology enables interactive and adaptive scenario-based tasks (SBTs) to be adopted in educational measurement. At the same time, it is a challenging problem to build appropriate psychometric models to analyze data collected from these tasks, due to the complexity of the data. This study focuses on process data collected from SBTs. We…
NASA Technical Reports Server (NTRS)
Parnell, Gregory S.; Rowell, William F.; Valusek, John R.
1987-01-01
In recent years there has been increasing interest in applying the computer based problem solving techniques of Artificial Intelligence (AI), Operations Research (OR), and Decision Support Systems (DSS) to analyze extremely complex problems. A conceptual framework is developed for successfully integrating these three techniques. First, the fields of AI, OR, and DSS are defined and the relationships among the three fields are explored. Next, a comprehensive adaptive design methodology for AI and OR modeling within the context of a DSS is described. These observations are made: (1) the solution of extremely complex knowledge problems with ill-defined, changing requirements can benefit greatly from the use of the adaptive design process, (2) the field of DSS provides the focus on the decision making process essential for tailoring solutions to these complex problems, (3) the characteristics of AI, OR, and DSS tools appears to be converging rapidly, and (4) there is a growing need for an interdisciplinary AI/OR/DSS education.
GUIDELINES TO ASSESSING REGIONAL VULNERABILITIES
Decision-makers today face increasingly complex environmental problems that require integrative and innovative approaches for analyzing, modeling, and interpreting various types of information. ReVA acknowledges this need and is designed to evaluate methods and models for synthe...
Individual Differences in Strategy Use on Division Problems: Mental versus Written Computation
ERIC Educational Resources Information Center
Hickendorff, Marian; van Putten, Cornelis M.; Verhelst, Norman D.; Heiser, Willem J.
2010-01-01
Individual differences in strategy use (choice and accuracy) were analyzed. A sample of 362 Grade 6 students solved complex division problems under 2 different conditions. In the choice condition students were allowed to use either a mental or a written strategy. In the subsequent no-choice condition, they were required to use a written strategy.…
The bright side of being blue: Depression as an adaptation for analyzing complex problems
Andrews, Paul W.; Thomson, J. Anderson
2009-01-01
Depression ranks as the primary emotional problem for which help is sought. Depressed people often have severe, complex problems, and rumination is a common feature. Depressed people often believe that their ruminations give them insight into their problems, but clinicians often view depressive rumination as pathological because it is difficult to disrupt and interferes with the ability to concentrate on other things. Abundant evidence indicates that depressive rumination involves the analysis of episode-related problems. Because analysis is time consuming and requires sustained processing, disruption would interfere with problem-solving. The analytical rumination (AR) hypothesis proposes that depression is an adaptation that evolved as a response to complex problems and whose function is to minimize disruption of rumination and sustain analysis of complex problems. It accomplishes this by giving episode-related problems priority access to limited processing resources, by reducing the desire to engage in distracting activities (anhedonia), and by producing psychomotor changes that reduce exposure to distracting stimuli. Because processing resources are limited, the inability to concentrate on other things is a tradeoff that must be made to sustain analysis of the triggering problem. The AR hypothesis is supported by evidence from many levels, including genes, neurotransmitters and their receptors, neurophysiology, neuroanatomy, neuroenergetics, pharmacology, cognition and behavior, and the efficacy of treatments. In addition, we address and provide explanations for puzzling findings in the cognitive and behavioral genetics literatures on depression. In the process, we challenge the belief that serotonin transmission is low in depression. Finally, we discuss implications of the hypothesis for understanding and treating depression. PMID:19618990
NASA Astrophysics Data System (ADS)
Yasami, Yasser; Safaei, Farshad
2018-02-01
The traditional complex network theory is particularly focused on network models in which all network constituents are dealt with equivalently, while failing to consider the supplementary information related to the dynamic properties of the network interactions. This is a main constraint leading to incorrect descriptions of some real-world phenomena or incomplete capture of the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines the layers' evolution (layer birth/death processes and lifetimes) throughout the network evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascade, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful to describe traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper also extends a generalized mathematical infrastructure to address the problems issued by multilayer complex networks. The model inference is performed using Markov Chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a tremendous improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for two important applications of missing link prediction and future link forecasting. The experimental results also indicate the strong predictive power of the proposed model for the application of cascade prediction in terms of accuracy.
NASA Astrophysics Data System (ADS)
Kozlovskaya, E. N.; Doroshenko, I. Yu.; Pogorelov, V. E.; Vaskivskyi, Ye. V.; Pitsevich, G. A.
2018-01-01
Previously calculated multidimensional potential-energy surfaces of the MeOH monomer and dimer, water dimer, malonaldehyde, formic acid dimer, free pyridine-N-oxide/trichloroacetic acid complex, and protonated water dimer were analyzed. The corresponding harmonic potential-energy surfaces near the global minima were constructed for series of clusters and complexes with hydrogen bonds of different strengths based on the behavior of the calculated multidimensional potential-energy surfaces. This enabled the introduction of an obvious anharmonicity parameter for the calculated potential-energy surfaces. The anharmonicity parameter was analyzed as a function of the size of the analyzed area near the energy minimum, the number of points over which energies were compared, and the dimensionality of the solved vibrational problem. Anharmonicity parameters for potential-energy surfaces in complexes with strong, medium, and weak H-bonds were calculated under identical conditions. The obtained anharmonicity parameters were compared with the corresponding diagonal anharmonicity constants for stretching vibrations of the bridging protons and the lengths of the hydrogen bridges.
Walters, William J; Christensen, Villy
2018-01-01
Ecotracer is a tool in the Ecopath with Ecosim (EwE) software package used to simulate and analyze the transport of contaminants such as methylmercury or radiocesium through aquatic food webs. Ecotracer solves the contaminant dynamic equations simultaneously with the biomass dynamic equations in Ecosim/Ecospace. In this paper, we give a detailed description of the Ecotracer module and analyze its performance on two problems of differing complexity. Ecotracer was modified from previous versions to more accurately model contaminant excretion, and new numerical integration algorithms were implemented to increase accuracy and robustness. To test the mathematical robustness of the computational algorithm, Ecotracer was tested on a simple problem for which we know an analytical solution. These results demonstrated the effectiveness of the program numerics. A much more complex model, the release of the cesium radionuclide 137Cs from the Fukushima Dai-ichi nuclear accident, was also modeled and analyzed. A comparison of the Ecotracer results to sampled 137Cs measurements in the coastal ocean area around Fukushima shows the promise of the tool but also highlights some important limitations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
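The core move of Ordinal Optimization, comparing noisy estimates only by rank and softening the goal to "the selected set overlaps the true good-enough set", can be illustrated with synthetic data as below. The population size, noise level, and set sizes are arbitrary assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, g = 1000, 50, 50            # designs, "good enough" set size, selected set size

true_perf = rng.normal(size=n)                            # unknown true performance (smaller is better)
noisy_est = true_perf + rng.normal(scale=2.0, size=n)     # one cheap, very noisy evaluation per design

good_enough = set(np.argsort(true_perf)[:k])              # true top-k designs
selected = set(np.argsort(noisy_est)[:g])                 # top-g by the noisy ordinal comparison

overlap = len(good_enough & selected)
print(f"selected set contains {overlap} of the true top-{k} designs")
# The point of ordinal comparison with goal softening: even under large noise, ordering is
# far more robust than estimating values, so the overlap well exceeds the random baseline g*k/n.
```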
Cross-national comparisons of complex problem-solving strategies in two microworlds.
Güss, C Dominik; Tuason, Ma Teresa; Gerhard, Christiane
2010-04-01
Research in the fields of complex problem solving (CPS) and dynamic decision making using microworlds has been mainly conducted in Western industrialized countries. This study analyzes the CPS process by investigating thinking-aloud protocols in five countries. Participants were 511 students from Brazil, Germany, India, the Philippines, and the United States who worked on two microworlds. On the basis of cultural-psychological theories, specific cross-national differences in CPS strategies were hypothesized. Following theories of situatedness of cognition, hypotheses about the specific frequency of problem-solving strategies in the two microworlds were developed. Results of the verbal protocols showed (a) modification of the theoretical CPS model, (b) task dependence of CPS strategies, and (c) cross-national differences in CPS strategies. Participants' CPS processes were particularly influenced by country-specific problem-solving strategies. Copyright © 2009 Cognitive Science Society, Inc.
Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.
Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk
2017-01-01
Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.
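A fixed-budget experiment of the kind analyzed above can be sketched with Randomized Local Search using a 2-opt (segment-reversal) move on a random Euclidean instance; the instance size and evaluation budget below are arbitrary assumptions.

```python
import random, math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rls_tsp(pts, budget, seed=0):
    """Randomized Local Search: one random 2-opt (segment reversal) per iteration,
    accepted only if it does not worsen the tour. Returns the fitness gain after
    `budget` evaluations, the quantity a fixed-budget analysis bounds in expectation."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    start = best = tour_length(tour, pts)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, pts)
        if cand_len <= best:
            tour, best = cand, cand_len
    return start - best

random.seed(1)
points = [(random.random(), random.random()) for _ in range(100)]
print("fitness gain within a budget of 20000 evaluations:", round(rls_tsp(points, 20000), 3))
```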
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
Representation of complex probabilities and complex Gibbs sampling
NASA Astrophysics Data System (ADS)
Salcedo, Lorenzo Luis
2018-03-01
Complex weights appear in Physics which are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.
INTEGRATING THE SCIENCE AND TECHNOLOGY OF ENVIRONMENTAL ASSESSMENT ACROSS FEDERAL AGENCIES
Seven Federal Agencies are conducting collaborative research to provide the next generation of environmental models for analyzing complex multimedia, multi-stressor contamination problems. Among the primary objectives of the Memorandum of Understanding (MOU) are 1) to provide a ...
Complex collaborative problem-solving processes in mission control.
Fiore, Stephen M; Wiltshire, Travis J; Oglesby, James M; O'Keefe, William S; Salas, Eduardo
2014-04-01
NASA's Mission Control Center (MCC) is responsible for control of the International Space Station (ISS), which includes responding to problems that obstruct the functioning of the ISS and that may pose a threat to the health and well-being of the flight crew. These problems are often complex, requiring individuals, teams, and multiteam systems, to work collaboratively. Research is warranted to examine individual and collaborative problem-solving processes in this context. Specifically, focus is placed on how Mission Control personnel-each with their own skills and responsibilities-exchange information to gain a shared understanding of the problem. The Macrocognition in Teams Model describes the processes that individuals and teams undertake in order to solve problems and may be applicable to Mission Control teams. Semistructured interviews centering on a recent complex problem were conducted with seven MCC professionals. In order to assess collaborative problem-solving processes in MCC with those predicted by the Macrocognition in Teams Model, a coding scheme was developed to analyze the interview transcriptions. Findings are supported with excerpts from participant transcriptions and suggest that team knowledge-building processes accounted for approximately 50% of all coded data and are essential for successful collaborative problem solving in mission control. Support for the internalized and externalized team knowledge was also found (19% and 20%, respectively). The Macrocognition in Teams Model was shown to be a useful depiction of collaborative problem solving in mission control and further research with this as a guiding framework is warranted.
NASA Astrophysics Data System (ADS)
Alpers, Andreas; Gritzmann, Peter
2018-03-01
We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time the current positions of points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.
The principle of superposition and its application in ground-water hydraulics
Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.
1987-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
Using machine-learning methods to analyze economic loss function of quality management processes
NASA Astrophysics Data System (ADS)
Dzedik, V. A.; Lontsikh, P. A.
2018-05-01
During analysis of quality management systems, their economic component is often analyzed insufficiently. To overcome this issue, it is necessary to take the concept of economic loss functions beyond tolerance thinking and address it directly. Input data about economic losses in processes have a complex form; thus, using standard tools to solve this problem is complicated. The use of machine learning techniques allows one to obtain precise models of the economic loss function based on even the most complex input data. The results of such an analysis contain data about the true efficiency of a process and can be used to make investment decisions.
NASA Astrophysics Data System (ADS)
Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.
2012-11-01
Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
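The complete-enumeration baseline that the factoring algorithm is designed to avoid can be written down directly for a toy network: enumerate every joint realization of the arc lengths, solve each deterministic shortest-path instance, and accumulate probabilities. The network and arc-length distributions below are made up for illustration.

```python
from itertools import product

# A tiny acyclic network: each arc length is a small discrete random variable.
# Arc (tail, head) -> list of (length, probability). All values are illustrative.
arcs = {
    ("s", "a"): [(1, 0.5), (3, 0.5)],
    ("s", "b"): [(2, 1.0)],
    ("a", "t"): [(2, 0.7), (5, 0.3)],
    ("b", "t"): [(2, 0.4), (4, 0.6)],
}

def shortest_path_length(lengths):
    """Shortest s-t path length for one realization of the arc lengths."""
    sa, sb, at, bt = (lengths[k] for k in (("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")))
    return min(sa + at, sb + bt)

# Complete enumeration over all joint realizations (exponential in the number of arcs;
# the conditional-factoring approach decomposes the network to avoid exactly this).
keys = list(arcs)
dist = {}
for combo in product(*(arcs[k] for k in keys)):
    lengths = {k: v for k, (v, _) in zip(keys, combo)}
    prob = 1.0
    for _, p in combo:
        prob *= p
    L = shortest_path_length(lengths)
    dist[L] = dist.get(L, 0.0) + prob

print("P(shortest path length = L):", dict(sorted(dist.items())))
```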
Predictability of Extreme Climate Events via a Complex Network Approach
NASA Astrophysics Data System (ADS)
Muhkin, D.; Kurths, J.
2017-12-01
We analyse climate dynamics from a complex network approach. This leads to an inverse problem: Is there a backbone-like structure underlying the climate system? For this we propose a method to reconstruct and analyze a complex network from data generated by a spatio-temporal dynamical system. This approach enables us to uncover relations to global circulation patterns in oceans and atmosphere. This concept is then applied to Monsoon data; in particular, we develop a general framework to predict extreme events by combining a non-linear synchronization technique with complex networks. Applying this method, we uncover a new mechanism of extreme floods in the eastern Central Andes which could be used for operational forecasts. Moreover, we analyze the Indian Summer Monsoon (ISM) and identify two regions of high importance. By estimating an underlying critical point, this leads to an improved prediction of the onset of the ISM; this scheme was successful in 2016 and 2017.
[Scientific and methodologic approaches to evaluating medical management for workers of Kazakhstan].
2012-01-01
The article covers topical problems of workers' health preservation. The results of complex research made it possible to evaluate and analyze occupational risks in leading industries of Kazakhstan, with the aim of improving scientific and methodologic approaches to medical management for workers exposed to hazardous conditions.
R&D 100, 2016: Pyomo 4.0 - Python Optimization Modeling Objects
Hart, William; Laird, Carl; Siirola, John
2018-06-13
Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
Complex Problem Solving in Teams: The Impact of Collective Orientation on Team Process Demands.
Hagemann, Vera; Kluge, Annette
2017-01-01
Complex problem solving is challenging and a high-level cognitive process for individuals. When analyzing complex problem solving in teams, an additional, new dimension has to be considered, as teamwork processes increase the requirements already put on individual team members. After introducing an idealized teamwork process model, that complex problem solving teams pass through, and integrating the relevant teamwork skills for interdependently working teams into the model and combining it with the four kinds of team processes (transition, action, interpersonal, and learning processes), the paper demonstrates the importance of fulfilling team process demands for successful complex problem solving within teams. Therefore, results from a controlled team study within complex situations are presented. The study focused on factors that influence action processes, like coordination, such as emergent states like collective orientation, cohesion, and trust and that dynamically enable effective teamwork in complex situations. Before conducting the experiments, participants were divided by median split into two-person teams with either high ( n = 58) or low ( n = 58) collective orientation values. The study was conducted with the microworld C3Fire, simulating dynamic decision making, and acting in complex situations within a teamwork context. The microworld includes interdependent tasks such as extinguishing forest fires or protecting houses. Two firefighting scenarios had been developed, which takes a maximum of 15 min each. All teams worked on these two scenarios. Coordination within the team and the resulting team performance were calculated based on a log-file analysis. The results show that no relationships between trust and action processes and team performance exist. Likewise, no relationships were found for cohesion. Only collective orientation of team members positively influences team performance in complex environments mediated by action processes such as coordination within the team. The results are discussed in relation to previous empirical findings and to learning processes within the team with a focus on feedback strategies.
Berry, Roberta M; Borenstein, Jason; Butera, Robert J
2013-06-01
This manuscript describes a pilot study in ethics education employing a problem-based learning approach to the study of novel, complex, ethically fraught, unavoidably public, and unavoidably divisive policy problems, called "fractious problems," in bioscience and biotechnology. Diverse graduate and professional students from four US institutions and disciplines spanning science, engineering, humanities, social science, law, and medicine analyzed fractious problems employing "navigational skills" tailored to the distinctive features of these problems. The students presented their results to policymakers, stakeholders, experts, and members of the public. This approach may provide a model for educating future bioscientists and bioengineers so that they can meaningfully contribute to the social understanding and resolution of challenging policy problems generated by their work.
da Silva, Weliton José; Jahn, Regine; Ludwig, Thelma Alvim Veiga; Hinz, Friedel; Menezes, Mariângela
2015-01-01
Specimens belonging to the Cymbella affinis/Cymbella tumidula/Cymbella turgidula species complex present many taxonomic problems, due to their high morphological variability and the lack of type designations. Fifteen taxon names of this complex, distributed in five species, were re-evaluated concerning their taxonomic status and lectotypified based on original material. In addition to light microscopy, some material was analyzed by electron microscopy. Four new combinations are proposed in order to reposition infraspecific taxa. PMID:26312038
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Dynamic Control of Plans with Temporal Uncertainty
NASA Technical Reports Server (NTRS)
Morris, Paul; Muscettola, Nicola; Vidal, Thierry
2001-01-01
Certain planning systems that deal with quantitative time constraints have used an underlying Simple Temporal Problem solver to ensure temporal consistency of plans. However, many applications involve processes of uncertain duration whose timing cannot be controlled by the execution agent. These cases require more complex notions of temporal feasibility. In previous work, various "controllability" properties such as Weak, Strong, and Dynamic Controllability have been defined. The most interesting and useful Controllability property, the Dynamic one, has ironically proved to be the most difficult to analyze. In this paper, we resolve the complexity issue for Dynamic Controllability. Unexpectedly, the problem turns out to be tractable. We also show how to efficiently execute networks whose status has been verified.
SWMM 5 - A Case Study of Model Re-Development
By the turn of the 21st century the U.S. Environmental Protection Agency’s (EPA) Storm Water Management Model (SWMM) already had a 30-year history of extensive use throughout the world for analyzing complex hydrologic, hydraulic, and water quality problems related to urban draina...
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
NASA Astrophysics Data System (ADS)
Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.
2015-04-01
This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
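The following Python sketch illustrates, under assumed simplifications, the core idea of measuring complexity as the spread of a model's simulations across resampled input forcings; the toy bucket model and the parameter names stand in for SIXPAR/SAC-SMA and are not the paper's algorithms.

```python
import numpy as np

def simulate(forcing, storage_capacity, recession):
    """Toy single-bucket runoff model standing in for SIXPAR/SAC-SMA."""
    s, out = 0.0, []
    for p in forcing:
        s = min(s + p, storage_capacity)   # fill storage, capped at capacity
        q = recession * s                  # linear recession outflow
        s -= q
        out.append(q)
    return np.asarray(out)

def complexity(forcing, params, n_resamples=200, seed=0):
    """Spread of simulated outputs across bootstrap-resampled forcings."""
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_resamples):
        resampled = rng.choice(forcing, size=forcing.size, replace=True)
        sims.append(simulate(resampled, *params))
    return np.array(sims).std(axis=0).mean()   # larger spread -> less stable representation

forcing = np.random.default_rng(1).gamma(2.0, 2.0, size=365)
print(complexity(forcing, params=(10.0, 0.3)))  # lower capacity / higher recession -> larger value
```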
Dynamic optimization of chemical processes using ant colony framework.
Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D
2001-11-01
The ant colony framework is illustrated by considering the dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively low computational effort. The examples analyzed here, with varying degrees of complexity, illustrate its potential for solving a large class of process optimization problems in chemical engineering.
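A minimal, hedged sketch of an ant-colony search over a discretized control profile is given below; the toy objective and parameter values are assumptions for illustration and do not reproduce the paper's benchmark problems.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stages, levels = 10, np.linspace(0.0, 1.0, 11)     # piecewise-constant control discretization
pheromone = np.ones((n_stages, levels.size))          # pheromone per (stage, control level)

def objective(u):                                     # toy performance index to maximize
    return -np.sum((u - 0.7) ** 2)

best_idx, best_f = None, -np.inf
for _ in range(50):                                   # iterations
    for _ in range(20):                               # ants
        idx = [rng.choice(levels.size, p=pheromone[s] / pheromone[s].sum())
               for s in range(n_stages)]              # sample a control profile
        f = objective(levels[idx])
        if f > best_f:
            best_idx, best_f = idx, f
    pheromone *= 0.9                                  # evaporation
    for s, i in enumerate(best_idx):
        pheromone[s, i] += 1.0                        # reinforce the best profile found
print(best_f, levels[best_idx].round(2))
```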
The Problem Child: Provocations toward Dismantling the Carceral State
ERIC Educational Resources Information Center
Meiners, Erica R.
2017-01-01
In this essay Erica R. Meiners argues that those committed to dismantling our nation's deep and racialized investments in policing and imprisoning must analyze how the flexible category of "the child," and its figurative powers, operate in complex ways to punish communities and naturalize and expand criminalization and surveillance.…
Peri-viable birth: legal considerations.
Sayeed, Sadath A
2014-02-01
Peri-viable birth raises an array of complex moral and legal concerns. This article discusses the problem with defining viability, touches on its relationship to abortion jurisprudence, and analyzes a few interesting normative implications of current medical practice at the time of peri-viable birth. Copyright © 2014 Elsevier Inc. All rights reserved.
Case Studies for Educational Leadership: Solving Administrative Dilemmas
ERIC Educational Resources Information Center
Midlock, Stephen F.
2010-01-01
"Case Studies for Educational Leadership" gives educational leadership students an opportunity to project themselves into real-life administrative situations and prepare for their future positions in the field. Each case study contained in this practical first edition book asks students to analyze complex problems, consider the moral ramifications…
Probing the Topological Properties of Complex Networks Modeling Short Written Texts
Amancio, Diego R.
2015-01-01
In recent years, graph theory has been widely employed to probe several language properties. More specifically, the so-called word adjacency model has been proven useful for tackling several practical problems, especially those relying on textual stylistic analysis. The most common approach to treat texts as networks has simply considered either large pieces of texts or entire books. This approach has certainly worked well, and many informative discoveries have been made this way, but it raises an uncomfortable question: could there be important topological patterns in small pieces of texts? To address this problem, the topological properties of subtexts sampled from entire books were probed. Statistical analyses performed on a dataset comprising 50 novels revealed that most of the traditional topological measurements are stable for short subtexts. When the performance of the authorship recognition task was analyzed, it was found that a proper sampling yields a discriminability similar to the one found with full texts. Surprisingly, the support vector machine classification based on the characterization of short texts outperformed the one performed with entire books. These findings suggest that a local topological analysis of large documents might improve their global characterization. Most importantly, it was verified, as a proof of principle, that short texts can be analyzed with the methods and concepts of complex networks. As a consequence, the techniques described here can be extended in a straightforward fashion to analyze texts as time-varying complex networks. PMID:25719799
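As an illustration of the word adjacency model mentioned above, the following sketch (assuming simple whitespace tokenization and unweighted edges) builds a small adjacency network and computes a few of the usual topological measurements with networkx.

```python
import networkx as nx

def adjacency_network(text):
    """Word adjacency network: nodes are words, edges link consecutive words."""
    tokens = [t.strip(".,;:!?\"'").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    g = nx.Graph()
    g.add_edges_from(zip(tokens, tokens[1:]))
    return g

sample = "the cat sat on the mat and the cat slept"
g = adjacency_network(sample)
print(g.number_of_nodes(), g.number_of_edges())
print(nx.average_clustering(g), nx.degree_histogram(g))   # typical topological measures
```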
Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next-generation computational techniques. Given the evident limitations of conventional grid-based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various types of structural mechanics and fracture mechanics applications, such as bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Some improved versions of the original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.
Data Synchronization Discrepancies in a Formation Flight Control System
NASA Technical Reports Server (NTRS)
Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)
2001-01-01
Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.
Overview of Aro Program on Network Science for Human Decision Making
NASA Astrophysics Data System (ADS)
West, Bruce J.
This program brings together researchers from disparate disciplines to work on a complex research problem that defies confinement within any single discipline. Consequently, not only are new and rewarding solutions sought and obtained for a problem of importance to society and the Army, that is, the human dimension of complex networks, but, in addition, collaborations are established that would not otherwise have formed given the traditional disciplinary compartmentalization of research. This program develops the basic research foundation of a science of networks supporting the linkage between the physical and human (cognitive and social) domains as they relate to human decision making. The strategy is to extend the recent methods of non-equilibrium statistical physics to non-stationary, renewal stochastic processes that appear to be characteristic of the interactions among nodes in complex networks. We also pursue understanding of the phenomenon of synchronization, whose mathematical formulation has recently provided insight into how complex networks reach accommodation and cooperation. The theoretical analyses of complex networks, although mathematically rigorous, often elude analytic solutions and require computer simulation and computation to analyze the underlying dynamic process.
Flight-deck automation - Promises and problems
NASA Technical Reports Server (NTRS)
Wiener, E. L.; Curry, R. E.
1980-01-01
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilising, and improving control and monitoring systems. Investigation into flight-deck automation systems is important as the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.
Analyzing and Detecting Problems in Systems of Systems
NASA Technical Reports Server (NTRS)
Lindvall, Mikael; Ackermann, Christopher; Stratton, William C.; Sibol, Deane E.; Godfrey, Sally
2008-01-01
Many software systems are evolving complex system of systems (SoS) for which inter-system communication is mission-critical. Evidence indicates that transmission failures and performance issues are not uncommon occurrences. In a NASA-supported Software Assurance Research Program (SARP) project, we are researching a new approach addressing such problems. In this paper, we are presenting an approach for analyzing inter-system communications with the goal to uncover both transmission errors and performance problems. Our approach consists of a visualization and an evaluation component. While the visualization of the observed communication aims to facilitate understanding, the evaluation component automatically checks the conformance of an observed communication (actual) to a desired one (planned). The actual and the planned are represented as sequence diagrams. The evaluation algorithm checks the conformance of the actual to the planned diagram. We have applied our approach to the communication of aerospace systems and were successful in detecting and resolving even subtle and long existing transmission problems.
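A much-simplified sketch of the evaluation idea, checking an observed (actual) message sequence against a planned one, is shown below; the message strings and the use of difflib are illustrative assumptions, not the SARP tool's implementation.

```python
from difflib import SequenceMatcher

planned = ["A->B:request", "B->C:query", "C->B:result", "B->A:response"]
actual  = ["A->B:request", "B->C:query", "B->A:response"]   # "C->B:result" never observed

matcher = SequenceMatcher(a=planned, b=actual)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":                                       # flag missing/extra/out-of-order messages
        print(tag, "planned:", planned[i1:i2], "actual:", actual[j1:j2])
# -> delete planned: ['C->B:result'] actual: []   (a transmission-failure candidate)
```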
Mathematical concepts for modeling human behavior in complex man-machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.; Rouse, W. B.
1979-01-01
Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications
Stoppe, Jannis; Drechsler, Rolf
2015-01-01
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC. PMID:25946632
A Riemann-Hilbert Approach to Complex Sharma-Tasso-Olver Equation on Half Line*
NASA Astrophysics Data System (ADS)
Zhang, Ning; Xia, Tie-Cheng; Hu, Bei-Bei
2017-11-01
In this paper, the Fokas unified method is used to analyze the initial-boundary value problem of a complex Sharma-Tasso-Olver (cSTO) equation on the half line. We show that the solution can be expressed in terms of the solution of a Riemann-Hilbert problem. The relevant jump matrices are explicitly given in terms of the matrix-valued spectral functions $\{a(\lambda), b(\lambda)\}$ and $\{A(\lambda), B(\lambda)\}$, which depend on the initial data $u_0(x) = u(x,0)$ and the boundary data $g_0(y) = u(0,y)$, $g_1(y) = u_x(0,y)$, $g_2(y) = u_{xx}(0,y)$. These spectral functions are not independent; they satisfy a global relation.
Bilodeau, Angèle; Beauchemin, Jean; Bourque, Denis; Galarneau, Marilène
2013-02-11
Based on a theory of intervention as a complex action system, this study analyzes collaboration among partners in Montréal's sexually transmitted and blood-borne infections (STBBI) prevention program to identify the main operations problems and possible scenarios for change to achieve better outcomes. A descriptive study was conducted using three data sources: public policies and programs, system management documents, and interviews with three types of partners. The results were validated with stakeholders. Five main operations problems affecting the capacity of the system to provide expected services were identified, as well as strategies the partners use to address them. Two scenarios for system change to increase its effectiveness in achieving program goals are discussed.
Modern technologies of processing municipal solid waste: investing in the future
NASA Astrophysics Data System (ADS)
Rumyantseva, A.; Berezyuk, M.; Savchenko, N.; Rumyantseva, E.
2017-06-01
The problem of effective municipal solid waste (MSW) management is known to all the municipal entities of the Russian Federation. The problem is multifaceted and complex. The article analyzes the dynamics of municipal solid waste formation and its utilization within the territory of the EU and Russia. The authors of the paper suggest a project of a plant for processing municipal solid waste into a combustible gas with the help of high temperature pyrolysis. The main indicators of economic efficiency are calculated.
The three-wave equation on the half-line
NASA Astrophysics Data System (ADS)
Xu, Jian; Fan, Engui
2014-01-01
The Fokas method is used to analyze the initial-boundary value problem for the three-wave equation $p_{ij,t} - \frac{b_i-b_j}{a_i-a_j}\,p_{ij,x} + \sum_k \left(\frac{b_k-b_j}{a_k-a_j} - \frac{b_i-b_k}{a_i-a_k}\right) p_{ik}\,p_{kj} = 0$, $i,j,k = 1,2,3$, on the half-line. Assuming that the solution p(x,t) exists, we show that it can be recovered from its initial and boundary values via the solution of a Riemann-Hilbert problem formulated in the plane of the complex spectral parameter λ.
Computational complexity in entanglement transformations
NASA Astrophysics Data System (ADS)
Chitambar, Eric A.
In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. Our results are obtained by encoding well-studied computational problems such as polynomial identity testing and tensor rank into questions of entanglement transformation. In this way, entanglement theory provides a physical manifestation of some of the most puzzling and abstract classical computation questions.
Cook, Daniel L; Farley, Joel F; Tapscott, Stephen J
2001-01-01
Background: We propose that a computerized, internet-based graphical description language for systems biology will be essential for describing, archiving and analyzing complex problems of biological function in health and disease. Results: We outline here a conceptual basis for designing such a language and describe BioD, a prototype language that we have used to explore the utility and feasibility of this approach to functional biology. Using example models, we demonstrate that a rather limited lexicon of icons and arrows suffices to describe complex cell-biological systems as discrete models that can be posted and linked on the internet. Conclusions: Given available computer and internet technology, BioD may be implemented as an extensible, multidisciplinary language that can be used to archive functional systems knowledge and be extended to support both qualitative and quantitative functional analysis. PMID:11305940
NASA Astrophysics Data System (ADS)
Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd
2017-10-01
Huge amounts of data in educational datasets can make it difficult to produce quality data. Recently, data mining approaches have been increasingly used by educational data mining researchers to analyze data patterns. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, these data suffer from computational complexity and require longer computation time for classification. The main objective of this research is to provide an overview of feature selection techniques that have been used to identify the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter and wrapper based techniques to support the prediction process in future study.
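A small sketch of a filter step followed by a wrapper step, in the spirit of the proposed framework but on synthetic data with scikit-learn, might look as follows; the dataset, feature counts, and estimator are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=40, n_informative=6, random_state=0)

filt = SelectKBest(f_classif, k=15).fit(X, y)              # filter: cheap univariate ranking
X_filtered = filt.transform(X)

wrapper = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6)
wrapper.fit(X_filtered, y)                                  # wrapper: model-driven elimination
print("kept features:", np.where(filt.get_support())[0][wrapper.support_])
```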
Cold Agglutinin Disease; A Laboratory Challenge.
Nikousefat, Zahra; Javdani, Moosa; Hashemnia, Mohammad; Haratyan, Abbas; Jalili, Ali
2015-10-01
Autoimmune haemolytic anemia (AIHA) is a complex process characterized by an immune reaction against red blood cell self-antigens. The analysis of specimens drawn from patients with cold autoimmune hemolytic anemia is a difficult problem for automated hematology analyzers. This paper was written to alert technologists and pathologists to the presence of cold agglutinins and their effect on laboratory tests. A 72-year-old female presented to the Shafa laboratory for hematology profile evaluation. CBC indices showed invalid findings on the Sysmex automated hematology analyzer. Checking the laboratory process showed precipitation residue sticking to the sides of the tube. After warming the tubes, results became valid and the problem was attributed to cold agglutinin disease. In this situation, aggregation of RBCs, which occurs at temperatures below 30°C, causes invalid findings when working with an automated hematology analyzer. Knowledge of this phenomenon can help prevent wasting time and support an early and accurate diagnosis.
Analyzing public health policy: three approaches.
Coveney, John
2010-07-01
Policy is an important feature of public and private organizations. Within the field of health as a policy arena, public health has emerged in which policy is vital to decision making and the deployment of resources. Public health practitioners and students need to be able to analyze public health policy, yet many feel daunted by the subject's complexity. This article discusses three approaches that simplify policy analysis: Bacchi's "What's the problem?" approach examines the way that policy represents problems. Colebatch's governmentality approach provides a way of analyzing the implementation of policy. Bridgman and Davis's policy cycle allows for an appraisal of public policy development. Each approach provides an analytical framework from which to rigorously study policy. Practitioners and students of public health gain much in engaging with the politicized nature of policy, and a simple approach to policy analysis can greatly assist one's understanding and involvement in policy work.
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.
Power, Gerald; Miller, Anne
2007-01-01
Cardiopulmonary bypass (CPB) is a complex task requiring high levels of practitioner expertise. Although some education standards exist, few are based on an analysis of perfusionists' problem-solving needs. This study shows the efficacy of work domain analysis (WDA) as a framework for analyzing perfusionists' conceptualization and problem-solving strategies. A WDA model of a CPB circuit was developed. A high-fidelity CPB simulator (Manbit) was used to present routine and oxygenator failure scenarios to six proficient perfusionists. The video-cued recall technique was used to elicit perfusionists' conceptualization strategies. The resulting recall transcripts were coded using the WDA model and analyzed for associations between task completion times and patterns of conceptualization. The WDA model developed was able to account for and describe the thought process followed by each participant. It was also shown that, although there was no correlation between experience with CPB and the ability to change an oxygenator, there was a link between specific thought patterns and efficiency in undertaking this task. Simulators are widely used in many fields of human endeavor, and in this research the attempt was made to use WDA to gain insights into the complexities of the human thought process when engaged in the complex task of conducting CPB. The assumption that experience equates with ability is challenged; rather, it is shown that thought process is a more significant determinant of success when engaged in complex tasks. WDA in combination with a CPB simulator may be used to elucidate successful strategies for completing complex tasks. PMID:17972450
Problem-Solving Test: Targeted Gene Disruption
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2008-01-01
Mutational inactivation of a specific gene is the most powerful technique to analyze the biological function of the gene. This approach has been used for a long time in viruses, bacteria, yeast, and fruit fly, but looked quite hopeless in more complex organisms. Targeted inactivation of specific genes (also known as knock-out mutation) in mice is…
ERIC Educational Resources Information Center
Kinnebrew, John S.; Segedy, James R.; Biswas, Gautam
2017-01-01
Research in computer-based learning environments has long recognized the vital role of adaptivity in promoting effective, individualized learning among students. Adaptive scaffolding capabilities are particularly important in open-ended learning environments, which provide students with opportunities for solving authentic and complex problems, and…
ERIC Educational Resources Information Center
Kerr, Deirdre
2014-01-01
Educational video games provide an opportunity for students to interact with and explore complex representations of academic content and allow for the examination of problem-solving strategies and mistakes that can be difficult to capture in more traditional environments. However, data from such games are notoriously difficult to analyze. This…
Interdisciplinarity: Wishful Thinking? Experiences at the University of Graz
ERIC Educational Resources Information Center
Bader, Lena; Zotter, Victoria
2012-01-01
Purpose: Interdisciplinarity is necessary to explain or solve complex problems and questions of today's world. Therefore, it's important to analyze the situation within an educational institution to get to know what students think about interdisciplinary work. The purpose of this paper is to try to achieve an insight into how e-learning…
RAND's Impact in the Middle East. Corporate Publication
ERIC Educational Resources Information Center
RAND Corporation, 2015
2015-01-01
The RAND Corporation works throughout the Middle East to analyze complex policy problems and help policymakers create enduring solutions. RAND's work in the Middle East focuses on the issues that drive economic development. This brief report provides an overview of RAND's impact in the Middle East in the areas of supporting youth, health and…
Teaching IS Ethics: Applying a Research Technique for Classroom Use
ERIC Educational Resources Information Center
Niederman, Fred; Taylor, Sallie; Dick, Geoffrey N.; Land, Lesley Pek Wee
2011-01-01
The nature of IS technologies and the range of their appropriate and inappropriate uses continue to evolve and expand. MIS educational programs have a challenge to provide both the appropriate content to introduce students to classic information ethics problems, as well as the methods for analyzing possible actions within a complex realistic…
Student-Teachers' Use of "Google Earth" in Problem-Based Geology Learning
ERIC Educational Resources Information Center
Ratinen, Ilkka; Keinonen, Tuula
2011-01-01
Geographical Information Systems (GIS) are adequate for analyzing complex scientific and spatial phenomena in geography education. "Google Earth" is a geographic information tool for GIS-based learning. It allows students to engage in the lesson, explore the Earth, explain what they identify and evaluate the implications of what they are…
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability; the second part performs a quantum search in such restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The query complexities for the worst case are respectively bounded by $O(\sqrt{2^{n-M'}})$ and $O(2^{n-M'})$, where $n$ is the number of variables and $M'$ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity for the proposed quantum algorithm is bounded by $O(2^{n/4})$ for 3-regular undirected graphs, where $n$ is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by $O(2^{31n/96})$. Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
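For context, a classical brute-force counter for Exact Satisfiability (assuming each clause is satisfied when exactly one of its variables is true, and ignoring negations) is sketched below; it makes concrete the $2^n$ baseline that the quantum query bounds improve upon. The clause encoding is hypothetical.

```python
from itertools import product

def count_exact_sat(n, clauses):
    """Count assignments where each clause has exactly one true variable (O(2^n))."""
    count = 0
    for assignment in product([False, True], repeat=n):
        if all(sum(assignment[v] for v in clause) == 1 for clause in clauses):
            count += 1
    return count

clauses = [(0, 1, 2), (1, 2, 3), (0, 3)]   # hypothetical small instance
print(count_exact_sat(4, clauses))
```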
Flux-vector splitting algorithm for chain-rule conservation-law form
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Nguyen, H. L.; Willis, E. A.; Steinthorsson, E.; Li, Z.
1991-01-01
A flux-vector splitting algorithm with Newton-Raphson iteration was developed for the 'full compressible' Navier-Stokes equations cast in chain-rule conservation-law form. The algorithm is intended for problems with deforming spatial domains and for problems whose governing equations cannot be cast in strong conservation-law form. The usefulness of the algorithm for such problems was demonstrated by applying it to analyze the unsteady, two- and three-dimensional flows inside one combustion chamber of a Wankel engine under nonfiring conditions. Solutions were obtained to examine the algorithm in terms of conservation error, robustness, and ability to handle complex flows on time-dependent grid systems.
Multifractality and heteroscedastic dynamics: An application to time series analysis
NASA Astrophysics Data System (ADS)
Nascimento, C. M.; Júnior, H. B. N.; Jennings, H. D.; Serva, M.; Gleria, Iram; Viswanathan, G. M.
2008-01-01
An increasingly important problem in physics concerns scale invariance symmetry in diverse complex systems, often characterized by heteroscedastic dynamics. We investigate the nature of the relationship between the heteroscedastic and fractal aspects of the dynamics of complex systems, by analyzing the sensitivity to heteroscedasticity of the scaling properties of weakly nonstationary time series. By using multifractal detrended fluctuation analysis, we study the singularity spectra of currency exchange rate fluctuations, after partially or completely eliminating n-point correlations via data shuffling techniques. We conclude that heteroscedasticity can significantly increase multifractality and interpret these findings in the context of self-organizing and adaptive complex systems.
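The sketch below illustrates, in simplified (monofractal) form, how shuffling a series to destroy its correlations changes the detrended-fluctuation scaling exponent; the synthetic correlated-noise generator and parameter choices are assumptions, and the paper itself uses the multifractal generalization (MF-DFA) on currency data.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Plain DFA: slope of log fluctuation vs. log window size."""
    y = np.cumsum(x - x.mean())                          # integrated profile
    flucts = []
    for s in scales:
        f2 = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # detrend each window
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

def correlated_noise(n, hurst, rng):
    """Approximate long-range-correlated noise via Fourier filtering."""
    freqs = np.fft.rfftfreq(n)[1:]
    spectrum = freqs ** (-(2 * hurst - 1) / 2) * np.exp(2j * np.pi * rng.random(freqs.size))
    x = np.fft.irfft(np.concatenate(([0], spectrum)), n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(0)
x = correlated_noise(4096, hurst=0.8, rng=rng)
print("original :", round(dfa_exponent(x), 2))                    # roughly 0.8
print("shuffled :", round(dfa_exponent(rng.permutation(x)), 2))   # roughly 0.5 (correlations destroyed)
```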
Requirements Analysis and Modeling with Problem Frames and SysML: A Case Study
NASA Astrophysics Data System (ADS)
Colombo, Pietro; Khendek, Ferhat; Lavazza, Luigi
Requirements analysis based on Problem Frames is getting an increasing attention in the academic community and has the potential to become of relevant interest also for industry. However the approach lacks an adequate notational support and methodological guidelines, and case studies that demonstrate its applicability to problems of realistic complexity are still rare. These weaknesses may hinder its adoption. This paper aims at contributing towards the elimination of these weaknesses. We report on an experience in analyzing and specifying the requirements of a controller for traffic lights of an intersection using Problem Frames in combination with SysML. The analysis was performed by decomposing the problem, addressing the identified sub-problems, and recomposing them while solving the identified interferences. The experience allowed us to identify certain guidelines for decomposition and re-composition patterns.
Causal chain analysis and root causes: the GIWA approach.
Belausteguigoitia, Juan Carlos
2004-02-01
The Global International Waters Assessment (GIWA) was created to help develop a priority setting mechanism for actions in international waters. Apart from assessing the severity of environmental problems in ecosystems, the GIWA's task is to analyze potential policy actions that could solve or mitigate these problems. Given the complex nature of the problems, understanding their root causes is essential to develop effective solutions. The GIWA provides a framework to analyze these causes, which is based on identifying the factors that shape human behavior in relation to the use (direct or indirect) of aquatic resources. Two sets of factors are analyzed. The first one consists of social coordination mechanisms (institutions). Faults in these mechanisms lead to wasteful use of resources. The second consists of factors that do not cause wasteful use of resources per se (poverty, trade, demographic growth, technology), but expose and magnify the faults of the first group of factors. The picture that comes out is that diagnosing simple generic causes, e.g. poverty or trade, without analyzing the case specific ways in which the root causes act and interact to degrade the environment, will likely ignore important links that may put the effectiveness of the recommended policies at risk. A summary of the causal chain analysis for the Colorado River Delta is provided as an example.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Pates, Carl S., III
1994-01-01
A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior, two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simplistic model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended for random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.
Breaking into the Hebrew Verb System: A Learning Problem
ERIC Educational Resources Information Center
Ashkenazi, Orit; Ravid, Dorit; Gillis, Steven
2016-01-01
Verb learning is an important part of linguistic acquisition. The present study examines the early phases of verb acquisition in Hebrew, a language with complex derivational and inflectional verb morphology, analyzing verbs in dense recordings of CDS and CS of two Hebrew-speaking parent-child dyads aged 1;8-2;2. The goal was to pinpoint those cues…
ERIC Educational Resources Information Center
Morin, Olivier; Simonneaux, Laurence; Simmoneaux, Jean; Tytler, Russell; Barraza, Laura
2014-01-01
Within the increasing body of research that examines students' reasoning on socioscientific issues, we consider in particular student reasoning concerning acute, open-ended questions that bring out the complexities and uncertainties embedded in ill-structured problems. In this paper, we propose a socioscientific sustainability reasoning…
ERIC Educational Resources Information Center
Mirel, Barbara
2001-01-01
Conducts a scenario-based usability test with 10 data analysts using visual querying (visually analyzing data with interactive graphics). Details a range of difficulties found in visual selection that, at times, gave rise to inaccurate selections, invalid conclusions, and misguided decisions. Argues that support for visual selection must be built…
ERIC Educational Resources Information Center
1989
Egypt is having a very difficult time supplying enough food to meet the demands of its swelling population. This complex problem is analyzed and discussed from a variety of perspectives. Factors such as foreign aid, family planning, agriculture, demographics, and religious and social culture are examined, as well as more specific issues such as…
Is Relational Reasoning Dependent on Language? A Voxel-Based Lesion Symptom Mapping Study
ERIC Educational Resources Information Center
Baldo, Juliana V.; Bunge, Silvia A.; Wilson, Stephen M.; Dronkers, Nina F.
2010-01-01
Previous studies with brain-injured patients have suggested that language abilities are necessary for complex problem-solving, even when tasks are non-verbal. In the current study, we tested this notion by analyzing behavioral and neuroimaging data from a large group of left-hemisphere stroke patients (n = 107) suffering from a range of language…
ERIC Educational Resources Information Center
Belzer, Alisa; Pickard, Amy
2015-01-01
This research synthesis analyzed qualitative depictions of adult literacy learners and identified five ways in which they are typically characterized: the Heroic Victim, the Needy (Problem) Child, the Broken (but Repairable) Cog, the Pawn of Destiny, and the Capable Comrade. These types do not capture the diversity or complexity of all adult…
After "DeFunis": Affirmative Action and the Jewish Community. Analysis, No. 46.
ERIC Educational Resources Information Center
Frank, Steven
The problems raised by the development of affirmative action and by the Jewish community's response to the complex social and legal issue are analyzed. The analysis focuses upon: initiation of affirmative action by presidential decree and its interpretation and implementation by the Department of Health, Education, and Welfare in the areas of…
Research on air and missile defense task allocation based on extended contract net protocol
NASA Astrophysics Data System (ADS)
Zhang, Yunzhi; Wang, Gang
2017-10-01
Against the background of distributed, cooperative engagement of air and missile defense elements, the interception task allocation problem of multiple weapon units against multiple targets under networked conditions is analyzed. Firstly, a mathematical model of task allocation is established by combat task decomposition. Secondly, an initial assignment based on auction contracts and an adjustment of the allocation scheme based on swap contracts are introduced into the task allocation. Finally, through simulation of a typical situation, it is shown that the model can be used to solve the task allocation problem in complex combat environments.
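A simplified sketch of the two stages described above, an auction-style initial assignment followed by pairwise swap contracts, is given below; the random cost matrix and the greedy auction rule are hypothetical stand-ins for the paper's combat-task model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
cost = rng.random((5, 5))                     # cost[u, t]: unit u intercepting target t

# Auction-style initialization: assign targets (in order of their cheapest bid)
# to the still-free unit with the lowest cost.
assignment = {}
free_units = set(range(cost.shape[0]))
for t in np.argsort(cost.min(axis=0)):
    u = min(free_units, key=lambda u: cost[u, t])
    assignment[t] = u
    free_units.remove(u)

# Swap contracts: exchange two targets whenever that lowers the summed cost.
improved = True
while improved:
    improved = False
    for t1, t2 in itertools.combinations(assignment, 2):
        u1, u2 = assignment[t1], assignment[t2]
        if cost[u2, t1] + cost[u1, t2] < cost[u1, t1] + cost[u2, t2]:
            assignment[t1], assignment[t2] = u2, u1
            improved = True

print(assignment, sum(cost[u, t] for t, u in assignment.items()))
```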
A novel numerical framework for self-similarity in plasticity: Wedge indentation in single crystals
NASA Astrophysics Data System (ADS)
Juul, K. J.; Niordson, C. F.; Nielsen, K. L.; Kysar, J. W.
2018-03-01
A novel numerical framework for analyzing self-similar problems in plasticity is developed and demonstrated. Self-similar problems of this kind include processes such as stationary cracks, void growth, indentation etc. The proposed technique offers a simple and efficient method for handling this class of complex problems by avoiding issues related to traditional Lagrangian procedures. Moreover, the proposed technique allows for focusing the mesh in the region of interest. In the present paper, the technique is exploited to analyze the well-known wedge indentation problem of an elastic-viscoplastic single crystal. However, the framework may be readily adapted to any constitutive law of interest. The main focus herein is the development of the self-similar framework, while the indentation study serves primarily as verification of the technique by comparing to existing numerical and analytical studies. In this study, the three most common metal crystal structures will be investigated, namely the face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close packed (HCP) crystal structures, where the stress and slip rate fields around the moving contact point singularity are presented.
Building Blocks for Reliable Complex Nonlinear Numerical Simulations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi N. (Technical Monitor)
2002-01-01
This talk describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
Building Blocks for Reliable Complex Nonlinear Numerical Simulations
NASA Technical Reports Server (NTRS)
Yee, H. C.
2005-01-01
This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations.
Building Blocks for Reliable Complex Nonlinear Numerical Simulations. Chapter 2
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
Asymptotic behavior of solutions of the renormalization group K-epsilon turbulence model
NASA Technical Reports Server (NTRS)
Yakhot, A.; Staroselsky, I.; Orszag, S. A.
1994-01-01
Presently, the only efficient way to calculate turbulent flows in complex geometries of engineering interest is to use Reynolds-averaged Navier-Stokes (RANS) equations. Compared to the original Navier-Stokes problem, these RANS equations possess a much more complicated nonlinear structure and may exhibit far more complex nonlinear behavior. In certain cases, the asymptotic behavior of such models can be studied analytically, which, aside from being an interesting fundamental problem, is important for better understanding of the internal structure of the models as well as for improving their performance. The renormalization group (RNG) K-epsilon turbulence model, derived directly from the incompressible Navier-Stokes equations, is analyzed. It has already been used to calculate a variety of turbulent and transitional flows in complex geometries. For large values of the RNG viscosity parameter, the model may exhibit singular behavior. In the form of the RNG K-epsilon model that avoids the use of explicit wall functions, a = 1, so the RNG viscosity parameter must be smaller than 23.62 to avoid singularities.
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention
Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian
2017-01-01
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
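The core of the approach, the complex-step derivative, can be illustrated on a scalar residual. The sketch below is only a minimal illustration under invented assumptions (the residual function and step sizes are made up); it is not the paper's thermo-mechanical finite element implementation.

```python
import numpy as np

def residual(u):
    # Illustrative nonlinear "residual"; a stand-in, not the thermo-mechanical
    # weak form from the paper.
    return u**3 + np.sin(u) - 1.0

def tangent_complex_step(f, u, h=1e-30):
    # Complex-step derivative: df/du ~ Im(f(u + i*h)) / h.
    # No subtraction occurs, so there is no cancellation error and the
    # perturbation h can be chosen arbitrarily small.
    return np.imag(f(u + 1j * h)) / h

def tangent_forward_difference(f, u, h=1e-8):
    # Classical forward difference; accuracy is limited by round-off cancellation.
    return (f(u + h) - f(u)) / h

u = 0.7
exact = 3 * u**2 + np.cos(u)
print("complex-step error :", abs(tangent_complex_step(residual, u) - exact))
print("forward-diff error :", abs(tangent_forward_difference(residual, u) - exact))
```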
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
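As a minimal illustration of turning a discrete model into a polynomial dynamical system, the sketch below encodes an invented three-node Boolean network as polynomials over GF(2) and finds its attractors by exhaustive enumeration. The network is made up for the example, and enumeration merely stands in for the computer-algebra methods ADAM uses; it only scales to very small models.

```python
from itertools import product

# Toy 3-node Boolean network written as polynomials over GF(2):
# AND -> x*y, OR -> x + y + x*y, NOT -> 1 + x (all mod 2).
def step(state):
    x, y, z = state
    fx = (y * z) % 2            # x' = y AND z
    fy = (x + z + x * z) % 2    # y' = x OR z
    fz = (1 + x) % 2            # z' = NOT x
    return (fx, fy, fz)

# Exhaustive search for attractors (feasible only for small n; ADAM instead
# analyzes the equivalent polynomial systems with computer algebra).
attractors = set()
for start in product((0, 1), repeat=3):
    seen = []
    state = start
    while state not in seen:
        seen.append(state)
        state = step(state)
    cycle = tuple(seen[seen.index(state):])  # the periodic part of the orbit
    attractors.add(frozenset(cycle))

for att in attractors:
    print(sorted(att))
```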
Application of the GERTS II simulator in the industrial environment.
NASA Technical Reports Server (NTRS)
Whitehouse, G. E.; Klein, K. I.
1971-01-01
GERT was originally developed to aid in the analysis of stochastic networks. GERT can be used to graphically model and analyze complex systems. Recently a simulator model, GERTS II, has been developed to solve GERT Networks. The simulator language used in the development of this model was GASP II A. This paper discusses the possible application of GERTS II to model and analyze (1) assembly line operations, (2) project management networks, (3) conveyor systems and (4) inventory systems. Finally, an actual application dealing with a job shop loading problem is presented.
Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube
NASA Astrophysics Data System (ADS)
Zhou, Guangzhao; Xu, Kun; Liu, Feng
2018-01-01
The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular its Nash Equilibrium (NE) concept, has been gaining importance in solving Multi Criterion Optimization (MCO) problems in engineering over the past decade. The solution of a MCO problem can be viewed as a NE under the concept of competitive games. This paper surveyed/proposed four efficient algorithms for calculating a NE of a MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical test functions, illustrating the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving a complex aerodynamic optimization problem.
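As a toy illustration of treating a multi-criterion solution as a Nash equilibrium, the sketch below runs a best-response (fixed-point) iteration on an invented two-player game with quadratic costs. It is not one of the paper's four algorithms; the payoffs are chosen only so that a unique equilibrium exists and the iteration contracts.

```python
# Toy "competitive game" view of a two-criterion problem: player 1 controls x,
# player 2 controls y, and each minimizes its own cost given the other's choice.
def best_response_x(y):
    # argmin_x (x - 1)^2 + 0.5 * (x - y)^2  ->  x = (2 + y) / 3
    return (2.0 + y) / 3.0

def best_response_y(x):
    # argmin_y (y + 1)^2 + 0.5 * (y - x)^2  ->  y = (x - 2) / 3
    return (x - 2.0) / 3.0

x, y = 0.0, 0.0
for _ in range(100):                       # fixed-point (best-response) iteration
    x_new, y_new = best_response_x(y), best_response_y(x)
    if max(abs(x_new - x), abs(y_new - y)) < 1e-12:
        x, y = x_new, y_new
        break
    x, y = x_new, y_new

print("approximate Nash equilibrium:", x, y)   # converges to (0.5, -0.5)
```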
Iatrogenics in Orthodontics and its challenges.
Barreto, Gustavo Mattos; Feitosa, Henrique Oliveira
2016-01-01
Orthodontics has gone through remarkable advances for those who practice it with dignity and clinical quality, alongside an unprecedented number of patients treated for some type of iatrogenic problem (post-treatment root resorptions; occlusal plane changes; midline discrepancies, asymmetries, etc.). Several questions may raise useful reflections about the constant increase of iatrogenics. What is causing it? Does it occur when dentists are properly trained? In legal terms, how can dentists accept these patients? How should they be orthodontically treated? What are the most common problems? This study analyzed and discussed relevant aspects to understand patients with iatrogenic problems and describes a simple and efficient approach to treating complex cases associated with orthodontic iatrogenics.
Iatrogenics in Orthodontics and its challenges
Barreto, Gustavo Mattos; Feitosa, Henrique Oliveira
2016-01-01
ABSTRACT Introduction: Orthodontics has gone through remarkable advances for those who practice it with dignity and clinical quality, alongside an unprecedented number of patients treated for some type of iatrogenic problem (post-treatment root resorptions; occlusal plane changes; midline discrepancies, asymmetries, etc.). Several questions may raise useful reflections about the constant increase of iatrogenics. What is causing it? Does it occur when dentists are properly trained? In legal terms, how can dentists accept these patients? How should they be orthodontically treated? What are the most common problems? Objective: This study analyzed and discussed relevant aspects to understand patients with iatrogenic problems and describes a simple and efficient approach to treating complex cases associated with orthodontic iatrogenics. PMID:27901237
NASA Astrophysics Data System (ADS)
Aji Hapsoro, Cahyo; Purqon, Acep; Srigutomo, Wahyu
2017-07-01
2-D Time Domain Electromagnetic (TDEM) modeling has been successfully conducted to illustrate the distribution of the electric field beneath the Earth's surface. The electric field, compared with the magnetic field, is used to analyze resistivity, one of the physical properties most important for determining the reservoir potential of geothermal systems as a renewable energy source. In this modeling we used the Time Domain Electromagnetic method because it can solve EM field interaction problems with complex geometry and can analyze transient problems. The TDEM method is used to model the electric and magnetic fields as functions of time, distance, and depth. The result of this modeling is the electric field intensity, which is capable of describing the structure of the Earth's subsurface. The result can be applied to describe the Earth's subsurface resistivity values and thus determine the reservoir potential of geothermal systems.
ERIC Educational Resources Information Center
Cai, Jinfa, And Others
1996-01-01
Presents a conceptual framework for analyzing students' mathematical understanding, reasoning, problem solving, and communication. Analyses of student responses indicated that the tasks appear to measure the complex thinking and reasoning processes that they were designed to assess. Concludes that the QUASAR assessment tasks can capture changes in…
Risk Management using Dependency Structure Matrix
NASA Astrophysics Data System (ADS)
Petković, Ivan
2011-09-01
An efficient method based on dependency structure matrix (DSM) analysis is given for ranking risks in a complex system or process whose entities are mutually dependent. The ranking is determined from the elements of the unique positive eigenvector corresponding to the spectral radius of the matrix that models the considered engineering system. For demonstration, the risk problem of NASA's robotic spacecraft is analyzed.
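A minimal sketch of the eigenvector-based ranking is shown below: power iteration on a small, invented dependency matrix yields the positive (Perron) eigenvector associated with the spectral radius, and its entries are read as risk scores. The matrix values are illustrative and are not taken from the NASA spacecraft example.

```python
import numpy as np

# Illustrative 4x4 dependency structure matrix (DSM): entry D[i, j] > 0 means
# element i depends on element j; the weights are invented.
D = np.array([
    [0.0, 0.6, 0.3, 0.2],
    [0.3, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.7],
    [0.1, 0.0, 0.3, 0.0],
])

# Power iteration: for a non-negative, irreducible (and here primitive) matrix
# it converges to the positive Perron eigenvector of the spectral radius.
v = np.ones(D.shape[0])
for _ in range(200):
    w = D @ v
    v = w / np.linalg.norm(w)

spectral_radius = v @ D @ v        # Rayleigh-quotient estimate (v is normalized)
ranking = np.argsort(-v)           # elements sorted by descending risk score
print("spectral radius ~", round(float(spectral_radius), 4))
print("risk ranking (most critical first):", ranking, "scores:", np.round(v, 3))
```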
Natural Language as a Tool for Analyzing the Proving Process: The Case of Plane Geometry Proof
ERIC Educational Resources Information Center
Robotti, Elisabetta
2012-01-01
In the field of human cognition, language plays a special role that is connected directly to thinking and mental development (e.g., Vygotsky, "1938"). Thanks to "verbal thought", language allows humans to go beyond the limits of immediately perceived information, to form concepts and solve complex problems (Luria, "1975"). So, it appears language…
The Detection Method of Fire Abnormal Based on Directional Drilling in Complex Conditions of Mine
NASA Astrophysics Data System (ADS)
Huijun, Duan; Shijun, Hao; Jie, Feng
2018-06-01
In light of the increasingly urgent problem of detecting hidden fire anomalies under complex mine conditions, a method based on directional drilling technology is put forward. The method can avoid obstacles in the mine and complete the detection of fire anomalies. Based on an analysis of the trajectory control of directional drilling, measurement while drilling, and the characteristics of the branch-opening process, a directional drilling plan is formulated for a mine with complex conditions, and the detection of fire anomalies is implemented. This method can provide technical support for fire prevention and a new way to detect fire anomalies in similar mines.
Determining biosonar images using sparse representations.
Fontaine, Bertrand; Peremans, Herbert
2009-05-01
Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent further development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance relative to delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.
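The L1-minimization step can be sketched with ISTA (iterative soft thresholding), one standard solver for this convex problem: a sparse reflector sequence is recovered from a noisy convolution with an emitted pulse. The pulse shape, problem sizes, regularization weight, and reflector positions below are all invented, and the paper's receiver model and HRTF-based localization are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward model: echo = A @ x, where x is a sparse target impulse response
# (a few reflectors) and each column of A is the emitted pulse delayed by one sample.
n_delay, m = 120, 150
t = np.arange(40)
pulse = np.sin(0.5 * t) * np.exp(-0.1 * t)
A = np.array([np.convolve(np.eye(n_delay)[k], pulse)[:m] for k in range(n_delay)]).T

x_true = np.zeros(n_delay)
x_true[[30, 34, 90]] = [1.0, -0.6, 0.8]          # closely spaced reflectors
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy echo

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n_delay)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L                # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
```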
The emerging problem of physical child abuse in South Korea.
Hahm, H C; Guterman, N B
2001-05-01
South Korea has had remarkably high incidence and prevalence rates of physical violence against children, yet the problem has received only limited public and professional attention until very recently. This article represents the first attempt in English to systematically analyze South Korea's recent epidemiological studies on child maltreatment. Discussed are sociocultural factors that have contributed both to delays in child protection laws and a low public awareness of the problem of child abuse. The article highlights methodological issues concerning the definition of physical abuse in South Korea and the complex attitudes toward violence. It also examines the role of the Korean women's movement in the reform of family laws and the recent establishment of new child protection legislation. Suggestions for future directions for the problem of child maltreatment within South Korea are presented.
Effect of interfacial stresses in an elastic body with a nanoinclusion
NASA Astrophysics Data System (ADS)
Vakaeva, Aleksandra B.; Grekov, Mikhail A.
2018-05-01
The 2-D problem of an infinite elastic solid with a nanoinclusion whose shape deviates from circular is solved. Interfacial stresses act at the interface, and contact of the inclusion with the matrix satisfies the ideal conditions of cohesion. The generalized Laplace-Young law defines the conditions at the interface. To solve the problem, the Gurtin-Murdoch surface elasticity model, Goursat-Kolosov complex potentials and the boundary perturbation method are used. The problem is reduced to the solution of two independent Riemann-Hilbert boundary value problems. For the circular inclusion, a hypersingular integral equation in the unknown interfacial stress is derived, and an algorithm for solving this equation is constructed. The influence of the interfacial stress and the size of the circular inclusion on the stress distribution and stress concentration at the interface is analyzed.
Resonant transition-based quantum computation
NASA Astrophysics Data System (ADS)
Chiang, Chen-Fu; Hsieh, Chang-Yu
2017-05-01
In this article we assess a novel quantum computation paradigm based on the resonant transition (RT) phenomenon commonly associated with atomic and molecular systems. We thoroughly analyze the intimate connections between the RT-based quantum computation and the well-established adiabatic quantum computation (AQC). Both quantum computing frameworks encode solutions to computational problems in the spectral properties of a Hamiltonian and rely on the quantum dynamics to obtain the desired output state. We discuss how one can adapt any adiabatic quantum algorithm to a corresponding RT version and the two approaches are limited by different aspects of Hamiltonians' spectra. The RT approach provides a compelling alternative to the AQC under various circumstances. To better illustrate the usefulness of the novel framework, we analyze the time complexity of an algorithm for 3-SAT problems and discuss straightforward methods to fine tune its efficiency.
Simulation Study on Missile Penetration Based on LS-DYNA
NASA Astrophysics Data System (ADS)
Tang, Jue; Sun, Xinli
2017-12-01
Penetrating shell armor is an effective means of destroying hard targets with multiple layers of protection. The penetration process belongs to high-speed impact dynamics, involving high pressure, high temperature, high speed and internal material damage, including plugging, penetration, spalling, caving, splashing and other complex failure forms; its analysis is therefore one of the difficulties in the study of impact dynamics. In this paper, the Lagrange algorithm and the SPH algorithm are used to analyze penetration of a steel plate. The penetration model of the rocket penetrating the steel plate, the failure modes of the steel plate and the missile, and the advantages and disadvantages of the Lagrange and SPH algorithms in simulating high-speed collision problems are analyzed and compared, providing a reference for the study of collision simulation problems.
Problem-gambling severity, suicidality and DSM-IV Axis II personality disorders.
Ronzitti, Silvia; Kraus, Shane W; Hoff, Rani A; Clerici, Massimo; Potenza, Marc N
2018-07-01
Despite the strong associations between personality disorders and problem/pathological gambling, few studies have investigated the relationships between personality disorders, problem-gambling severity and suicidal thoughts/behaviors. We examined the relationships between problem-gambling severity and personality disorders among individuals with differing levels of suicidality (none, thoughts alone, attempts). We analyzed data from 13,543 participants of the National Epidemiologic Survey of Alcohol and Related Conditions (NESARC) study. First, differences in sociodemographic characteristics and prevalence of personality disorders were analyzed according to problem-gambling severity and suicidality status. Second, we performed a logistic regression to assess the relationship between problem-gambling severity and DSM-IV Axis II psychopathology according to suicidality level. At-risk or problem/pathological gambling groups showed higher rates of a wide range of personality disorders compared to non-gamblers. Logistic regression showed that at-risk pathological gamblers had a higher odds ratio for any personality disorder in the group with no history of suicidality, particularly for cluster-B personality disorders. Odds-ratio interaction analysis identified that the relationship between problem-gambling severity and personality disorders, particularly those in cluster B, differs according to suicidality status. Our findings suggest a complex relationship between suicidality, problem-gambling severity and personality disorders. The stronger relationship between problem-gambling severity and personality disorders in people with no suicidality as compared to some suicidality suggests that some of the relationship between greater problem-gambling severity and Axis II psychopathology is accounted for by increased suicidality. The findings have implications for clinical interventions targeting suicidality in individuals with gambling disorders. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Chen, Zhongzhou; Demirci, Neset; Choi, Youn-Jeng; Pritchard, David E.
2017-06-01
Previous research on problem diagrams suggested that including a supportive diagram, one that does not provide necessary problem solving information, may bring little, or even negative, benefit to students' problem solving success. We tested the usefulness of problem diagrams on 12 different physics problems (6A/B experiments) in our massive open online course. By analyzing over 8000 student responses in total, we found that including a problem diagram that contains no significant additional information only slightly improves the first attempt correct rate for the few most spatially complex problems, and has little impact on either the final correct percentage or the time spent on solving the problem. On the other hand, in half of the cases, removing the diagram significantly increased the fraction of students' drawing their own diagrams during problem solving. The increase in drawing behavior is largely independent of students' physics abilities. In summary, our results suggest that for many physics problems, the benefit of a diagram is exceedingly small and may not justify the effort of creating one.
Supplier selection based on complex indicator of finished products quality
NASA Astrophysics Data System (ADS)
Chernikova, Anna; Golovkina, Svetlana; Kuzmina, Svetlana; Demenchenok, Tatiana
2017-10-01
In the article the authors consider possible directions for solving the problems that arise when selecting a supplier of raw materials and supplies for an industrial enterprise; possible difficulties are analyzed and ways of resolving them are suggested. Various methods for improving the efficiency of the supplier selection process are considered, based on an analysis of the paper-bag supplier selection process for the needs of a construction company. The calculation of generalized indicators and of a complex indicator, which incorporates single indicators formed into groups that reflect different aspects of quality, is presented.
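A minimal sketch of such an aggregation is shown below: single quality indicators are grouped into generalized (group) indicators and then combined, with weights, into one complex indicator per candidate supplier. All indicator names, values, and weights are invented and are not taken from the article.

```python
# Illustrative aggregation of single quality indicators into group (generalized)
# indicators and a final complex indicator for each candidate supplier.
suppliers = {
    "supplier_A": {"strength": 0.82, "porosity": 0.75, "print_quality": 0.90,
                   "on_time_rate": 0.95, "price_score": 0.60},
    "supplier_B": {"strength": 0.78, "porosity": 0.88, "print_quality": 0.70,
                   "on_time_rate": 0.85, "price_score": 0.90},
}

# Single indicators grouped by the quality aspect they reflect, with weights.
groups = {
    "material_quality": {"strength": 0.5, "porosity": 0.5},
    "finishing":        {"print_quality": 1.0},
    "logistics_cost":   {"on_time_rate": 0.6, "price_score": 0.4},
}
group_weights = {"material_quality": 0.5, "finishing": 0.2, "logistics_cost": 0.3}

def complex_indicator(scores):
    # Generalized indicator per group, then weighted sum across groups.
    generalized = {
        g: sum(w * scores[name] for name, w in members.items())
        for g, members in groups.items()
    }
    return sum(group_weights[g] * v for g, v in generalized.items()), generalized

for name, scores in suppliers.items():
    total, per_group = complex_indicator(scores)
    print(name, round(total, 3), {g: round(v, 3) for g, v in per_group.items()})
```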
Identification of complex flows in Taylor-Couette counter-rotating cavities
NASA Technical Reports Server (NTRS)
Czarny, O.; Serre, E.; Bontoux, P.; Lueptow, R. M.
2001-01-01
The transition in confined rotating flows is a topical problem with many industrial and fundamental applications. The purpose of this study is to investigate the Taylor-Couette flow in a finite-length cavity with counter-rotating walls, for two aspect ratios L=5 or L=6. Two complex regimes of wavy vortex and spirals are emphasized for the first time via direct numerical simulation, by using a three-dimensional spectral method. The spatio-temporal behavior of the solutions is analyzed and compared to the few data actually available.
Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions
2017-01-01
Multi-robot task allocation is one of the main problems to address in order to design a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. A lot of factors affect the performance of these systems; among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To the best of our knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear problem. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and with a new method proposed in this paper. The results obtained show how the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a large reduction of the execution time. PMID:28118384
Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions.
Guerrero, Jose; Oliver, Gabriel; Valero, Oscar
2017-01-01
Multi-robot task allocation is one of the main problems to address in order to design a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. A lot of factors affect the performance of these systems; among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To the best of our knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear problem. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and with a new method proposed in this paper. The results obtained show how the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a large reduction of the execution time.
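To make the allocation model concrete, the sketch below scores every assignment of three robots to two deadline-constrained tasks under a toy interference rule (each extra robot in a coalition adds a fractional slowdown) and picks the best assignment by brute force. The instance, the robot speeds, and the interference model are invented; the paper obtains the optimum by solving an integer linear program rather than by enumeration.

```python
from itertools import product

# Toy instance: robots are assigned to tasks; a task is rewarded only if its
# coalition finishes the required work before the deadline.
robots = ["r1", "r2", "r3"]
tasks = {            # task: (workload, deadline, reward)
    "t1": (6.0, 4.0, 10.0),
    "t2": (3.0, 5.0, 6.0),
}
speed = {"r1": 1.0, "r2": 1.5, "r3": 0.8}
interference = 0.2   # fractional slowdown per additional robot in a coalition

def utility(assignment):
    total = 0.0
    for task, (work, deadline, reward) in tasks.items():
        coalition = [r for r, t in zip(robots, assignment) if t == task]
        if not coalition:
            continue
        slowdown = 1.0 + interference * (len(coalition) - 1)
        rate = sum(speed[r] for r in coalition) / slowdown
        if work / rate <= deadline:
            total += reward
    return total

choices = list(tasks) + [None]               # each robot picks a task or stays idle
best = max(product(choices, repeat=len(robots)), key=utility)
print("best assignment:", dict(zip(robots, best)), "utility:", utility(best))
```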
NASA Astrophysics Data System (ADS)
Podschuweit, Sören; Bernholt, Sascha; Brückmann, Maja
2016-05-01
Background: Complexity models have provided a suitable framework in various domains to assess students' educational achievement. Complexity is often used as the analytical focus when regarding learning outcomes, i.e. when analyzing written tests or problem-centered interviews. Numerous studies reveal negative correlations between the complexity of a task and the probability of a student solving it. Purpose: Thus far, few detailed investigations explore the importance of complexity in actual classroom lessons. Moreover, the few efforts made so far revealed inconsistencies. Hence, the present study sheds light on the influence the complexity of students' and teachers' class contributions have on students' learning outcomes. Sample: Videos of 10 German 8th grade physics courses covering three consecutive lessons on two topics each (electricity, mechanics) have been analyzed. The sample includes 10 teachers and 290 students. Design and methods: Students' and teachers' verbal contributions were coded manual-based according to the level of complexity. Additionally, pre-post testing of knowledge in electricity and mechanics was applied to assess the students' learning gain. ANOVA analysis was used to characterize the influence of the complexity on the learning gain. Results: Results indicate that the mean level of complexity in classroom contributions explains a large portion of variance in post-test results on class level. Despite this overarching trend, taking classroom activities into account as well reveals even more fine-grained patterns, leading to more specific relations between the complexity in the classroom and students' achievement. Conclusions: In conclusion, we argue for more reflected teaching approaches intended to gradually increase class complexity to foster students' level of competency.
Tschentscher, Nadja; Hauk, Olaf
2015-01-01
Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as the number size. Only a few studies used subjective rating procedures. In fMRI, we provided evidence that strategy self-reports control better for task complexity across arithmetic conditions than objective features (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, by using a paradigm designed for neuroimaging research. We found that strategy ratings were superior to objective features as predictors of performance. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of variance and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result. Reaction times classified task complexity better when defined by individual ratings. This suggests that participants' strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research.
Tschentscher, Nadja; Hauk, Olaf
2015-01-01
Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as the number size. Only a few studies used subjective rating procedures. In fMRI, we provided evidence that strategy self-reports control better for task complexity across arithmetic conditions than objective features (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, by using a paradigm designed for neuroimaging research. We found that strategy ratings were superior to objective features as predictors of performance. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of variance and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result. Reaction times classified task complexity better when defined by individual ratings. This suggests that participants' strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research. PMID:26321997
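The analysis pipeline described above (PCA on reaction times, comparison of predictors, ROC classification) can be sketched on synthetic data as below. The data generator, the labels, and the use of scikit-learn are assumptions made for illustration; none of the study's data, predictors, or thresholds are reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in data: 60 "participants" x 20 arithmetic "items". Items that
# would be rated as needing a procedural strategy get longer reaction times, so the
# rating-based labels should classify RTs better than the unrelated objective labels.
n_subj, n_items = 60, 20
strategy_labels = rng.random(n_items) < 0.5      # self-reported "complex" items
objective_labels = rng.random(n_items) < 0.5     # e.g. "large operands" (unrelated here)
rt = rng.normal(1.2, 0.1, (n_subj, 1)) + 0.05 * rng.standard_normal((n_subj, n_items))
rt += 0.4 * strategy_labels                      # strategy drives RT in this toy model

# PCA with items as observations: the first component carries most RT variance.
pca = PCA(n_components=2).fit(rt.T)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# ROC analysis: mean item RT as the score, complexity label as the class.
scores = rt.mean(axis=0)
print("AUC with strategy-based labels :", round(roc_auc_score(strategy_labels, scores), 3))
print("AUC with objective labels      :", round(roc_auc_score(objective_labels, scores), 3))
```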
Aspects of the Two Language System and Three Language Problem in the Changing Society of Hong Kong.
ERIC Educational Resources Information Center
Tsou, Benjamin K.
1996-01-01
Presents details of the language shifts among the various sections of the Chinese-speaking population in Hong Kong and analyzes patterns of allegiance. Notes that complex social, economic, and political pressures will affect future language in Hong Kong and that, within the domains of family, work, and others, the use of Modern Standard Chinese is…
Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer
2006-03-01
able to analyze and design aircraft and missile guidance and control systems, including feedback stabilization schemes and stochastic processes, using ... Uncertainty modeling for robust control; Robust closed-loop stability and performance; Robust H-infinity control; Robustness check using mu-analysis ... Controlled feedback (reduces noise) 3. Statistical group response (reduce pressure toward conformity) When used as a tool to study a complex problem
Basic investigation of turbine erosion phenomena
NASA Technical Reports Server (NTRS)
Pouchot, W. D.; Kothmann, R. E.; Fentress, W. K.; Heymann, F. J.; Varljen, T. C.; Chi, J. W. H.; Milton, J. D.; Glassmire, C. M.; Kyslinger, J. A.; Desai, K. A.
1971-01-01
An analytical-empirical model of turbine erosion is presented that fits and explains experience in both steam and metal vapor turbines. Because of the complexities involved in analyzing turbine problems in a pure scientific sense, it is obvious that this goal can be only partially realized. Therefore, emphasis is placed on providing a useful model for preliminary erosion estimates for given configurations, fluids, and flow conditions.
ERIC Educational Resources Information Center
Munoz-Organero, Mario; Ramirez, Gustavo A.; Merino, Pedro Munoz; Kloos, Carlos Delgado
2010-01-01
The use of swarm intelligence techniques in e-learning scenarios provides a way to combine simple interactions of individual students to solve a more complex problem. After getting some data from the interactions of the first students with a central system, the use of these techniques converges to a solution that the rest of the students can…
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis
Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.
2017-10-13
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. Here, we present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. Here, we present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.
Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J
2017-12-01
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis-that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
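One simple way to make "how much information is contained in a given function" concrete is to compute the entropy of a function's output over uniformly distributed inputs, as in the sketch below for a few three-variable Boolean functions. This is an illustrative measure only and is not claimed to be the paper's full information-theoretic classification.

```python
from itertools import product
from collections import Counter
from math import log2

def information_content(f, alphabet=(0, 1), arity=3):
    # Entropy of the function's output when inputs are independent and uniform;
    # a simple way to quantify how much information a discrete function carries.
    outputs = Counter(f(*args) for args in product(alphabet, repeat=arity))
    total = len(alphabet) ** arity
    return -sum((c / total) * log2(c / total) for c in outputs.values())

examples = {
    "constant":     lambda x, y, z: 0,
    "projection":   lambda x, y, z: x,
    "majority":     lambda x, y, z: int(x + y + z >= 2),
    "parity (xor)": lambda x, y, z: (x + y + z) % 2,
}
for name, f in examples.items():
    print(f"{name:13s} H(output) = {information_content(f):.3f} bits")
```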
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results of calculations of shock-wave interaction with a low-density bubble and of the problem of gas flow under gravity are presented. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of the TVD scheme, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are solved numerically on structured or unstructured grids. To improve the accuracy of the calculations it is necessary to choose a sufficiently fine grid (with a small cell size), which has the drawback of a substantial increase in computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR): the grid is refined only in the areas of interest, where, e.g., shock waves are generated, complex geometry is present, or other such features exist. Thus, the computing time is greatly reduced. In addition, the execution of the application on the resulting sequence of nested, progressively finer grids can be parallelized. The proposed algorithm is based on the AMR method, which can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are implemented for calculations on graphics processors using the CUDA technology [1].
NASA Astrophysics Data System (ADS)
Agranovich, Daniel; Polygalov, Eugene; Popov, Ivan; Ben Ishai, Paul; Feldman, Yuri
2017-03-01
One of the approaches to bypass the problem of electrode polarization in dielectric measurements is the free electrode method. The advantage of this technique is that the probing electric field in the material is not supplied by contact electrodes, but rather by electromagnetic induction. We have designed an inductive dielectric analyzer based on a sensor comprising two concentric toroidal coils. In this work, we present an analytic derivation of the relationship between the impedance measured by the sensor and the complex dielectric permittivity of the sample. The obtained relationship was successfully employed to measure the dielectric permittivity and conductivity of various alcohols and aqueous salt solutions.
Method for Evaluating Information to Solve Problems of Control, Monitoring and Diagnostics
NASA Astrophysics Data System (ADS)
Vasil'ev, V. A.; Dobrynina, N. V.
2017-06-01
The article describes a method for evaluating information to solve problems of control, monitoring and diagnostics. The method is needed to reduce the dimensionality of the informational indicators of situations, bring them to relative units, calculate generalized information indicators on their basis, rank them by characteristic levels, and calculate the efficiency criterion of a system functioning in real time. On this basis, the design of an information evaluation system has been developed that allows analyzing, processing and assessing information about an object. Such an object can be a complex technical, economic or social system. The method and the system based on it can find wide application in the analysis, processing and evaluation of information on the functioning of systems, regardless of their purpose, goals, tasks and complexity. For example, they can be used to assess the innovation capacities of industrial enterprises and management decisions.
NASA Astrophysics Data System (ADS)
Nasr, Mamdouh H.; Othman, Mohamed A. K.; Eshrah, Islam A.; Abuelfadl, Tamer M.
2017-04-01
New developments in the eigenmode projection technique (EPT) are introduced in solving problems of electromagnetic resonance in closed cavities as well as scattering from discontinuities in guided-wave structures. The EPT invokes the eigenmodes of a canonical predefined cavity in the solution procedure and uses the expansion of these eigenmodes to solve Maxwell's equations, in conjunction with a convenient choice of port boundary conditions. For closed cavities, a new spurious-mode separation method is developed, showing robust and efficient spurious-mode separation. This has been tested using more complex and practical examples demonstrating the powerful use of the presented approach. For waveguide scattering problems, convergence studies are being performed showing stable solutions for a relatively small number of expansion modes, and the proposed method has advantages over conventional solvers in analyzing electromagnetic problems with inhomogeneous materials. These convergence studies also lead to an efficient rule-of-thumb for the number of modes to be used in the simulation. The ability to handle closed and open structures is presented in a unified framework that highlights the generality of the EPT which could be used to analyze and design a variety of microwave components.
A survey of automated methods for sensemaking support
NASA Astrophysics Data System (ADS)
Llinas, James
2014-05-01
Complex, dynamic problems in general present a challenge for the design of analysis support systems and tools largely because there is limited reliable a priori procedural knowledge descriptive of the dynamic processes in the environment. Problem domains that are non-cooperative or adversarial impute added difficulties involving suboptimal observational data and/or data containing the effects of deception or covertness. The fundamental nature of analysis in these environments is based on composite approaches involving mining or foraging over the evidence, discovery and learning processes, and the synthesis of fragmented hypotheses; together, these can be labeled as sensemaking procedures. This paper reviews and analyzes the features, benefits, and limitations of a variety of automated techniques that offer possible support to sensemaking processes in these problem domains.
Computational problems and signal processing in SETI
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard
1991-01-01
The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10 exp 11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.
Investigation of model-based physical design restrictions (Invited Paper)
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl
2005-05-01
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Few studies pay attention to prediction for analog circuits. The few existing methods do not relate feature extraction and calculation to circuit analysis, so the FI (fault indicator) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in order to obtain a more reasonable FI feature set by calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Since the FI feature set calculation is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.
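The particle-filter step can be illustrated with a minimal bootstrap filter that tracks a drifting degradation value and its rate from a noisy fault-indicator stream and then extrapolates to a failure threshold. The degradation model, noise levels, and threshold are invented; this is not the paper's complex-field FI model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy degradation model: a component parameter drifts linearly with unknown rate;
# the observed fault indicator (FI) is the parameter plus measurement noise.
T, n_particles = 60, 2000
true_value = 1.0 + 0.03 * np.arange(T)
observations = true_value + 0.05 * rng.standard_normal(T)

# Particles carry (value, rate); the filter updates both from the FI stream.
values = 1.0 + 0.05 * rng.standard_normal(n_particles)
rates = 0.05 * rng.random(n_particles)

for z in observations:
    rates += 0.002 * rng.standard_normal(n_particles)           # random-walk rate
    values += rates + 0.01 * rng.standard_normal(n_particles)   # propagate state
    weights = np.exp(-0.5 * ((z - values) / 0.05) ** 2)          # Gaussian likelihood
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
    values, rates = values[idx], rates[idx]

# Remaining useful performance: steps until the filtered value crosses a threshold.
threshold = 3.0
rul = (threshold - values.mean()) / max(rates.mean(), 1e-9)
print(f"estimated value {values.mean():.2f}, rate {rates.mean():.3f}, "
      f"predicted steps to threshold ~ {rul:.0f}")
```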
Technical Parameters Modeling of a Gas Probe Foaming Using an Active Experimental Type Research
NASA Astrophysics Data System (ADS)
Tîtu, A. M.; Sandu, A. V.; Pop, A. B.; Ceocea, C.; Tîtu, S.
2018-06-01
The present paper deals with a current and complex topic, namely solving a technical problem concerning the modeling and subsequent optimization of technical parameters related to the natural gas extraction process. The subject of the study is to optimize gas probe foaming using experimental research methods and data processing, through regular probe interventions with different foaming agents. This procedure reduces the hydrostatic pressure through foam formation from the deposit water and the scrubbing agent, which can then be removed at the surface by the produced gas flow. The probe production data were analyzed and the candidate for the research itself emerged. This is an extremely complex study carried out in the field; it was found that, due to severe gas field depletion, the flow of the wells decreases and the start of their loading with deposit water was registered. Regular foaming of the wells was required to optimize the daily production flow and to dispose of the water accumulated in the wellbore. In order to analyze the natural gas production process, the factorial experiment and other methods were used. The reason for this choice is that the method can offer very good research results with a small number of experimental data. Finally, through this study the extraction process problems were identified by analyzing and optimizing the technical parameters, which led to a quality improvement of the extraction process.
Leon, Juan S; Winskell, Kate; McFarland, Deborah A; del Rio, Carlos
2015-03-01
Global health is a dynamic, emerging, and interdisciplinary field. To address current and emerging global health challenges, we need a public health workforce with adaptable and collaborative problem-solving skills. In the 2013-2014 academic year, the Hubert Department of Global Health at the Rollins School of Public Health-Emory University launched an innovative required core course for its first-year Master of Public Health students in the global health track. The course uses a case-based, problem-based learning approach to develop global health competencies. Small teams of students propose solutions to these problems by identifying learning issues and critically analyzing and synthesizing new information. We describe the course structure and logistics used to apply this approach in the context of a large class and share lessons learned.
Interdisciplinary Matchmaking: Choosing Collaborators by Skill, Acquaintance and Trust
NASA Astrophysics Data System (ADS)
Hupa, Albert; Rzadca, Krzysztof; Wierzbicki, Adam; Datta, Anwitaman
Social networks are commonly used to enhance recommender systems. Most of such systems recommend a single resource or a person. However, complex problems or projects usually require a team of experts that must work together on a solution. Team recommendation is much more challenging, mostly because of the complex interpersonal relations between members. This chapter presents fundamental concepts on how to score a team based on members' social context and their suitability for a particular project. We represent the social context of an individual as a three-dimensional social network (3DSN) composed of a knowledge dimension expressing skills, a trust dimension and an acquaintance dimension. Dimensions of a 3DSN are used to mathematically formalize the criteria for prediction of the team's performance. We use these criteria to formulate the team recommendation problem as a multi-criteria optimization problem. We demonstrate our approach on empirical data crawled from two web2.0 sites:
Metabolic Compartmentation – A System Level Property of Muscle Cells
Saks, Valdur; Beraud, Nathalie; Wallimann, Theo
2008-01-01
Problems of quantitative investigation of intracellular diffusion and compartmentation of metabolites are analyzed. Principal controversies in recently published analyses of these problems for living cells are discussed. It is shown that the formal theoretical analysis of diffusion of metabolites based on Fick's equation and using fixed diffusion coefficients for diluted homogenous aqueous solutions, but applied for biological systems in vivo without any comparison with experimental results, may lead to misleading conclusions, which are contradictory to most biological observations. However, if the same theoretical methods are used for analysis of actual experimental data, the apparent diffusion constants obtained are orders of magnitude lower than those in diluted aqueous solutions. Thus, it can be concluded that local restrictions of diffusion of metabolites in a cell are a system-level property caused by complex structural organization of the cells, macromolecular crowding, cytoskeletal networks and organization of metabolic pathways into multienzyme complexes and metabolons. This results in microcompartmentation of metabolites, their channeling between enzymes and in modular organization of cellular metabolic networks. The perspectives of further studies of these complex intracellular interactions in the framework of Systems Biology are discussed. PMID:19325782
Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I
2011-09-26
A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
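A generic sketch of a GA with an adaptive penalty is shown below: the penalty coefficient is raised while most of the population violates the constraint and relaxed otherwise. The constrained test problem, the adaptation rule, and all GA settings are invented; this is neither the Nanakorrn penalty formulation nor the VTVH MCD fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimize f(x) = sum(x^2) subject to sum(x) >= 1 with a real-coded GA whose
# penalty coefficient adapts to the fraction of feasible individuals.
dim, pop_size, n_gen = 4, 60, 300

def objective(pop):
    return np.sum(pop**2, axis=1)

def violation(pop):
    return np.maximum(0.0, 1.0 - np.sum(pop, axis=1))   # distance below sum(x) = 1

pop = rng.uniform(-2.0, 2.0, size=(pop_size, dim))
penalty = 1.0
for _ in range(n_gen):
    viol = violation(pop)
    feasible_frac = float(np.mean(viol == 0.0))
    # Adaptive rule: tighten the penalty while the population is mostly infeasible,
    # relax it once most individuals satisfy the constraint.
    penalty = penalty * 1.2 if feasible_frac < 0.5 else max(penalty * 0.9, 1.0)
    fitness = objective(pop) + penalty * viol

    # Binary tournament selection.
    a = rng.integers(pop_size, size=pop_size)
    b = rng.integers(pop_size, size=pop_size)
    parents = pop[np.where(fitness[a] < fitness[b], a, b)]

    # Arithmetic crossover between shuffled parent pairs, then Gaussian mutation.
    partners = parents[rng.permutation(pop_size)]
    alpha = rng.random((pop_size, 1))
    children = alpha * parents + (1 - alpha) * partners
    children += 0.05 * rng.standard_normal(children.shape)

    # Elitism: keep the best individual from the current generation.
    best = pop[np.argmin(fitness)]
    pop = children
    pop[0] = best

final_fitness = objective(pop) + penalty * violation(pop)
x_best = pop[np.argmin(final_fitness)]
print("best x:", np.round(x_best, 3), " f =", round(float(np.sum(x_best**2)), 4),
      " sum(x) =", round(float(np.sum(x_best)), 3))
```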
Dependency visualization for complex system understanding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, J. Allison Cory
1994-09-01
With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs. The list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem. The ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed graph based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
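The core idea of casting steady states as solutions of polynomial equations over GF(2) can be shown on a toy example. The three-node Boolean network below is hypothetical, and brute-force enumeration stands in for the computer-algebra machinery (e.g., Gröbner-basis solvers) that lets ADAM handle large sparse models.

```python
from itertools import product

# Hypothetical 3-node Boolean network: x1' = x2 AND x3, x2' = NOT x1, x3' = x1 OR x2.
# Over GF(2): AND -> multiplication, NOT a -> 1 + a, a OR b -> a + b + a*b (mod 2).
def step(x1, x2, x3):
    return (x2 * x3) % 2, (1 + x1) % 2, (x1 + x2 + x1 * x2) % 2

# A fixed point (steady-state attractor) is a state with f(x) = x,
# i.e., a solution of f(x) + x = 0 over GF(2).
fixed_points = [s for s in product((0, 1), repeat=3) if step(*s) == s]
print(fixed_points)
```

Longer attractor cycles can be found with the same kind of polynomial system by composing the update function with itself before equating it to the identity.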
Direct mass spectrometry approaches to characterize polyphenol composition of complex samples.
Fulcrand, Hélène; Mané, Carine; Preys, Sébastien; Mazerolles, Gérard; Bouchut, Claire; Mazauric, Jean-Paul; Souquet, Jean-Marc; Meudec, Emmanuelle; Li, Yan; Cole, Richard B; Cheynier, Véronique
2008-12-01
Lower molecular weight polyphenols including proanthocyanidin oligomers can be analyzed after HPLC separation on either reversed-phase or normal phase columns. However, these techniques are time consuming and can have poor resolution as polymer chain length and structural diversity increase. The detection of higher molecular weight compounds, as well as the determination of molecular weight distributions, remain major challenges in polyphenol analysis. Approaches based on direct mass spectrometry (MS) analysis that are proposed to help overcome these problems are reviewed. Thus, direct flow injection electrospray ionization mass spectrometry analysis can be used to establish polyphenol fingerprints of complex extracts such as in wine. This technique enabled discrimination of samples on the basis of their phenolic (i.e. anthocyanin, phenolic acid and flavan-3-ol) compositions, but larger oligomers and polymers were poorly detectable. Detection of higher molecular weight proanthocyanidins was also restricted with matrix-assisted laser desorption ionization (MALDI) MS, suggesting that they are difficult to desorb as gas-phase ions. The mass distribution of polymeric fractions could, however, be determined by analyzing the mass distributions of bovine serum albumin/proanthocyanidin complexes using MALDI-TOF-MS.
Image understanding in terms of semiotics
NASA Astrophysics Data System (ADS)
Zakharko, E.; Kaminsky, Roman M.; Shpytko, V.
1995-06-01
Human perception of pictorial visual information is investigated from the iconic-sign viewpoint, and an appropriate semiotic model is discussed. Image construction (syntactics) is analyzed as a complex hierarchical system, and various types of pictorial objects, their relations, and regular configurations are represented, studied, and modeled. The relations between image syntactics, semantics, and pragmatics are investigated. The application of the research results to problems of thematic interpretation of Earth-surface remote images is illustrated.
2012-01-01
The mismatch, it was feared, would wreck the processes that the ERP was trying to improve; customers did not have the choice of putting the ERP...program features ahead of attempting a deep dive into the data looking for problems. An initial conceptual framework would allow a decision-maker to
Learning to Predict Social Influence in Complex Networks
2012-03-29
03/2010 – 17/03/2012 Abstract: First, we addressed the problem of analyzing information diffusion process in a social network using two kinds...algorithm which avoids the inner loop optimization during the search. We tested the performance using the structures of four real world networks, and...result of information diffusion that starts from the node. We use "infected" and "activated" interchangeably.
Deciphering the Mechanism of Alternative Cleavage and Polyadenylation in Mantle Cell Lymphoma (MCL)
2013-10-01
also has human firefly luciferase cloned within the same reporter system allowing for intra-plasmid normalization of transfection eliminating problems...collaboration with Dr. Wei Li, a Bioinformaticist from Baylor College of Medicine whose lab specializes in developing complex algorithms to analyze genome...wide sequencing data. Dr. Wei Li and his postdoctoral fellow, Dr. Zheng Xia developed a customized algorithm that is able to detect and quantify
NASA Astrophysics Data System (ADS)
Mashkov, O. A.; Samborskiy, I. I.
2009-10-01
The body of work on functionally stable systems calls for an analysis of the results obtained and for their interpretation in the general context of the development and applications of cybernetics. The paper describes this field of science, its main results, and the prospects of the new theory of functional stability of dynamical systems, as applied to the problem of engineering remotely piloted aircraft using pseudo-satellite technologies.
NASA Astrophysics Data System (ADS)
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in E-learning programs as examples, we compare the results obtained from the respondents with those obtained from our method, and discuss the differences between the impact-relations maps produced by these two approaches.
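For readers unfamiliar with DEMATEL, the sketch below (Python/NumPy) performs the standard steps of normalizing a direct-influence matrix and computing the total-relation matrix T = X(I - X)^-1. The 4x4 matrix is hypothetical, and the naive mean threshold at the end is only a placeholder for the maximum mean de-entropy algorithm proposed in the paper.

```python
import numpy as np

# Hypothetical direct-influence matrix among four criteria (0 = none, 4 = very high).
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Standard DEMATEL steps: normalize, then total relation T = X (I - X)^-1.
X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
T = X @ np.linalg.inv(np.eye(len(A)) - X)

# A simple mean threshold for the impact-relations map (the paper proposes a
# maximum mean de-entropy algorithm instead of this naive cutoff).
threshold = T.mean()
edges = [(i, j) for i in range(len(T)) for j in range(len(T)) if T[i, j] > threshold]
print(np.round(T, 2), edges, sep="\n")
```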
Effect of rich-club on diffusion in complex networks
NASA Astrophysics Data System (ADS)
Berahmand, Kamal; Samadi, Negin; Sheikholeslami, Seyed Mahmood
2018-05-01
One of the main issues in complex networks is the phenomenon of diffusion, in which the goal is to find the nodes with the highest diffusing power. In diffusion there is always a trade-off between accuracy and time complexity; therefore, most recent studies have focused on finding new centralities to solve this problem and have offered new ones, but our approach is different. Using one of the features of complex networks, namely the "rich-club", its effect on diffusion in complex networks has been analyzed, and it is demonstrated that in datasets with a pronounced rich-club it is better to use degree centrality for finding influential nodes, because it has linear time complexity and uses only local information; this rule, however, does not apply to datasets with a weak rich-club. Real and artificial datasets with a pronounced rich-club are then used, in which degree centrality is compared with well-known centrality measures using the standard SIR model.
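A minimal sketch of the quantities involved, using NetworkX (which provides a rich-club coefficient routine) on an example scale-free graph; the graph and the cutoff of the top ten nodes are illustrative, not the datasets or the evaluation protocol of the paper.

```python
import networkx as nx

G = nx.barabasi_albert_graph(500, 4, seed=1)   # example scale-free network

# Rich-club coefficient per degree k, normalized against degree-preserving
# randomizations; values well above 1 indicate a pronounced rich-club.
rc = nx.rich_club_coefficient(G, normalized=True)

# Where the rich-club is strong, the paper argues that plain degree centrality
# is a good linear-time proxy for spreading power.
top_spreaders = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
print(max(rc.values()), top_spreaders)
```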
The design of multiplayer online video game systems
NASA Astrophysics Data System (ADS)
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
2003-11-01
The distributed Multiplayer Online Game (MOG) system is complex since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture and then evaluate it. Furthermore, we propose a clustered-server architecture to provide a scalable solution, together with a region-oriented allocation strategy. Two key issues, i.e. interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
[Terrorism and mental health (problem's scale, population tolerance, management of care)].
Iastrebov, V S
2004-01-01
The consequences of terrorist threats and terrorist acts for the mental health of individuals, groups, and the community in general are analyzed. Mental disorders emerging in the victims of terrorism are described. The problem of using terrorist threats as a psychological weapon is discussed. The population's tolerance of terrorism can be divided into two types: psychophysiological and socio-psychological. Ways of increasing tolerance of terrorist threats and terrorist acts are suggested. Assistance at the sites of terrorist acts must be comprehensive and provided by different specialists, including psychologists and psychiatrists. The importance of support from state structures and the community in this work is emphasized.
Iancu, Ovidiu D; Darakjian, Priscila; Kawane, Sunita; Bottomly, Daniel; Hitzemann, Robert; McWeeney, Shannon
2012-01-01
Complex Mus musculus crosses, e.g., heterogeneous stock (HS), provide increased resolution for quantitative trait loci detection. However, increased genetic complexity challenges detection methods, with discordant results due to low data quality or complex genetic architecture. We quantified the impact of these factors across three mouse crosses and two different detection methods, identifying procedures that greatly improve detection quality. Importantly, HS populations have complex genetic architectures not fully captured by the whole genome kinship matrix, calling for incorporating chromosome specific relatedness information. We analyze three increasingly complex crosses, using gene expression levels as quantitative traits. The three crosses were an F(2) intercross, a HS formed by crossing four inbred strains (HS4), and a HS (HS-CC) derived from the eight lines found in the collaborative cross. Brain (striatum) gene expression and genotype data were obtained using the Illumina platform. We found large disparities between methods, with concordance varying as genetic complexity increased; this problem was more acute for probes with distant regulatory elements (trans). A suite of data filtering steps resulted in substantial increases in reproducibility. Genetic relatedness between samples generated an overabundance of detected eQTLs; an adjustment procedure that includes the kinship matrix attenuates this problem. However, we find that relatedness between individuals is not evenly distributed across the genome; information from distinct chromosomes results in relatedness structure different from the whole genome kinship matrix. Shared polymorphisms from distinct chromosomes collectively affect expression levels, confounding eQTL detection. We suggest that considering chromosome specific relatedness can result in improved eQTL detection.
Jupiter Data Analysis Program: Analysis of Voyager wideband plasma wave observations
NASA Technical Reports Server (NTRS)
Kurth, W. S.
1983-01-01
Voyager plasma wave wideband frames from the Jovian encounters are analyzed. The 511 frames which were analyzed were chosen on the basis of low-rate spectrum analyzer data from the plasma wave receiver. These frames were obtained in regions and during times of various types of plasma or radio wave activity as determined by the low-rate, low-resolution data and were processed in order to provide high resolution measurements of the plasma wave spectrum for use in the study of a number of outstanding problems. Chorus emissions at Jupiter were analyzed. The detailed temporal and spectral form of the very complex chorus emissions near L = 8 on the Voyager 1 inbound passage was compared to both terrestrial chorus emissions as well as to the theory which was developed to explain the terrestrial waves.
Artificial intelligence applied to process signal analysis
NASA Technical Reports Server (NTRS)
Corsberg, Dan
1988-01-01
Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.
The Complex Economic System of Supply Chain Financing
NASA Astrophysics Data System (ADS)
Zhang, Lili; Yan, Guangle
Supply Chain Financing (SCF) refers to a series of innovative and complex financial services based on the supply chain. The SCF set-up is a complex system in which supply chain management and financing services for Small and Medium Enterprises (SMEs) systematically interpenetrate. This paper establishes the organizational structure of the SCF system and presents two financing models, with and without the participation of a third-party logistics provider (3PL). Using Information Economics and Game Theory, the interrelationship among the diverse economic sectors is analyzed, and the economic mechanisms underlying the development and existence of the SCF system are demonstrated. New thoughts and approaches to solving the SME financing problem are given.
Observation of atmospheric time variation of Mira stars using Interferometry
NASA Astrophysics Data System (ADS)
Lacour, S.; Perrin, G.; Haubois, X.; Meimon, S.; Monnier, J.; Berger, J. P.; Traub, W.; Schuller, P.
2006-08-01
Interferometric data of Mira-type stars in the near-infrared have already produced radial visibility curves with shapes far from a simple limb-darkening profile. The measured visibilities as a function of wavelength revealed the presence in the K band of a close (at a distance of about one stellar radius above the photosphere) molecular layer. However, thanks to the phase closure and telescope mobility of the IOTA interferometer, we now have access to the two-dimensional complex visibility profile. We will present the u-v plane of different Mira stars at different epochs, and we will discuss the problems and advantages of analyzing complex objects in the Fourier domain.
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
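A sketch of the multidimensional-scaling step using scikit-learn is shown below; the random symmetric dissimilarity matrix stands in for the pooled pairwise complexity judgments collected in the study, and only the embedding mechanics are illustrated.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Stand-in for pooled pairwise dissimilarity judgments over 21 scenes
# (in the study these would come from subjects' relative-complexity responses).
D = rng.random((21, 21))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)

# Embed the scenes in two dimensions, as in the paper's first analysis.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)   # (21, 2): one point per scene
```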
Mesoscale modeling: solving complex flows in biology and biotechnology.
Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander
2013-07-01
Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
A multilevel finite element method for Fredholm integral eigenvalue problems
NASA Astrophysics Data System (ADS)
Xie, Hehu; Zhou, Tao
2015-12-01
In this work, we proposed a multigrid finite element (MFE) method for solving the Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in the applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted to doing a series of integral iterations and eigenvalue solving in the coarsest mesh. Then, any existing efficient integration scheme can be used for the associated integration process. The error estimates are provided, and the computational complexity is analyzed. It is noticed that the total computational work of our method is comparable with a single integration step in the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
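The kind of problem being solved can be illustrated with a single-mesh Nyström discretization and a plain power iteration (Python/NumPy below); the exponential kernel is a common covariance in Karhunen-Loève expansions, and the correlation length is arbitrary. This is only the baseline computation that the paper's multilevel scheme accelerates, not the method itself.

```python
import numpy as np

# Nystrom discretization of the Fredholm problem  int k(x, y) u(y) dy = lambda u(x)
# on [0, 1], with an exponential covariance kernel often used in Karhunen-Loeve
# expansions of random fields (the correlation length 0.5 is an arbitrary choice).
n = 200
x, h = np.linspace(0.0, 1.0, n, retstep=True)
K = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5) * h   # kernel times quadrature weight

# Power iteration for the dominant eigenpair (the multilevel method in the paper
# instead alternates coarse-mesh eigen-solves with fine-mesh integrations).
u = np.ones(n)
for _ in range(500):
    v = K @ u
    u = v / np.linalg.norm(v)
lam = u @ (K @ u)
print(lam)
```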
Winskell, Kate; McFarland, Deborah A.; del Rio, Carlos
2015-01-01
Global health is a dynamic, emerging, and interdisciplinary field. To address current and emerging global health challenges, we need a public health workforce with adaptable and collaborative problem-solving skills. In the 2013–2014 academic year, the Hubert Department of Global Health at the Rollins School of Public Health–Emory University launched an innovative required core course for its first-year Master of Public Health students in the global health track. The course uses a case-based, problem-based learning approach to develop global health competencies. Small teams of students propose solutions to these problems by identifying learning issues and critically analyzing and synthesizing new information. We describe the course structure and logistics used to apply this approach in the context of a large class and share lessons learned. PMID:25706029
Experimental realization of a one-way quantum computer algorithm solving Simon's problem.
Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G
2014-11-14
We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem-a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
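For orientation, the classical post-processing step of Simon's algorithm can be sketched as follows (Python): each run of the quantum circuit yields a bitstring y orthogonal to the hidden period s modulo 2, and s is recovered from the accumulated linear constraints. The outcomes below are hypothetical, and brute force replaces Gaussian elimination over GF(2) for brevity; the optical implementation reported in the paper is not captured here.

```python
from itertools import product

# Hypothetical measurement outcomes y from Simon's algorithm for n = 3 qubits,
# each satisfying y . s = 0 (mod 2) for the hidden string s.
ys = [(1, 1, 0), (0, 1, 1)]

def dot2(a, b):
    return sum(ai * bi for ai, bi in zip(a, b)) % 2

# Recover s as the nonzero vector orthogonal to every observed y.
# (Gaussian elimination over GF(2) is the scalable approach; brute force
# is fine for three qubits.)
candidates = [s for s in product((0, 1), repeat=3)
              if any(s) and all(dot2(y, s) == 0 for y in ys)]
print(candidates)   # -> [(1, 1, 1)]
```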
Analysis and Reduction of Complex Networks Under Uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.
Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Xiujuan; Chen, Jiapei
2017-03-01
Due to the existence of complexities of heterogeneity, hierarchy, discreteness, and interactions in municipal solid waste management (MSWM) systems such as Beijing, China, a series of socio-economic and eco-environmental problems may emerge or worsen and result in irredeemable damage in the following decades. Meanwhile, existing studies, especially ones focusing on MSWM in Beijing, can hardly reflect these complexities in system simulations or provide reliable decision support for management practices. Thus, a framework of distributed mixed-integer fuzzy hierarchical programming (DMIFHP) is developed in this study for MSWM under these complexities. Beijing is selected as a representative case. The Beijing MSWM system is comprehensively analyzed in many aspects, such as socio-economic conditions, natural conditions, spatial heterogeneities, treatment facilities, and system complexities, building a solid foundation for system simulation and optimization. Correspondingly, the MSWM system in Beijing is discretized as 235 grids to reflect spatial heterogeneity. A DMIFHP model, which is a nonlinear programming problem, is constructed to parameterize the Beijing MSWM system. To enable it to be solved rigorously, a solution algorithm is proposed based on coupling fuzzy programming and mixed-integer linear programming. Innovations and advantages of the DMIFHP framework are discussed. The optimal MSWM schemes and mechanism revelations will be discussed in a companion paper due to length limitations.
Registered nurses' clinical reasoning skills and reasoning process: A think-aloud study.
Lee, JuHee; Lee, Young Joo; Bae, JuYeon; Seo, Minjeong
2016-11-01
As complex chronic diseases are increasing, nurses' prompt and accurate clinical reasoning skills are essential. However, little is known about the reasoning skills of registered nurses. This study aimed to determine how registered nurses use their clinical reasoning skills and to identify how the reasoning process proceeds in the complex clinical situations of a hospital setting. A qualitative exploratory design was used with a think-aloud method. A total of 13 registered nurses (mean years of experience=11.4) participated in the study, solving an ill-structured clinical problem based on complex chronic patient cases in a hospital setting. Data were analyzed using deductive content analysis. Findings showed that the registered nurses used a variety of clinical reasoning skills. The most commonly used skill was 'checking accuracy and reliability.' The reasoning process of registered nurses covered the assessment, analysis, diagnosis, planning/implementation, and evaluation phases. It is critical that registered nurses apply appropriate clinical reasoning skills in complex clinical practice. The main focus of registered nurses' reasoning in this study was assessing a patient's health problem, and their reasoning process was cyclic, rather than linear. There is a need for educational strategy development to enhance registered nurses' competency in determining appropriate interventions in a timely and accurate fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ranking in evolving complex networks
NASA Astrophysics Data System (ADS)
Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang
2017-05-01
Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.
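As a concrete example of the static ranking algorithms the review surveys, the snippet below computes PageRank with NetworkX on a random directed graph; the graph and parameters are illustrative. Time-aware variants discussed in the review typically rescale or reweight such scores by node and edge age so that recently active nodes are not systematically under-ranked.

```python
import networkx as nx

# A static snapshot of a (hypothetical) evolving directed network.
G = nx.gnp_random_graph(200, 0.03, seed=2, directed=True)

# PageRank, the canonical static ranking algorithm discussed in the review.
pr = nx.pagerank(G, alpha=0.85)
top_nodes = sorted(pr, key=pr.get, reverse=True)[:5]
print(top_nodes)
```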
Decision support systems and methods for complex networks
Huang, Zhenyu [Richland, WA; Wong, Pak Chung [Richland, WA; Ma, Jian [Richland, WA; Mackey, Patrick S [Richland, WA; Chen, Yousu [Richland, WA; Schneider, Kevin P [Seattle, WA
2012-02-28
Methods and systems for automated decision support in analyzing operation data from a complex network. Embodiments of the present invention utilize these algorithms and techniques not only to characterize the past and present condition of a complex network, but also to predict future conditions to help operators anticipate deteriorating and/or problem situations. In particular, embodiments of the present invention characterize network conditions from operation data using a state estimator. Contingency scenarios can then be generated based on those network conditions. For at least a portion of all of the contingency scenarios, risk indices are determined that describe the potential impact of each of those scenarios. Contingency scenarios with risk indices are presented visually as graphical representations in the context of a visual representation of the complex network. Analysis of the historical risk indices based on the graphical representations can then provide trends that allow for prediction of future network conditions.
Lessons Learned from Crowdsourcing Complex Engineering Tasks.
Staffelbach, Matthew; Sempolinski, Peter; Kijewski-Correa, Tracy; Thain, Douglas; Wei, Daniel; Kareem, Ahsan; Madey, Gregory
2015-01-01
Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations. Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks. This was done to determine if the benefits of crowdsourcing could be harnessed to accurately and effectively contribute to solving complex real world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data. We compared the skill of the anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students, making a first pass at analyzing wind simulation data. For the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. A second phase of our experiment instructed crowd workers and students to create simulations on our Virtual Wind Tunnel website to solve a more complex task. With a sufficiently comprehensive tutorial and compensation similar to typical crowd-sourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students with more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers. As tasks become more complex, the employment relationship begins to become more akin to outsourcing than crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems.
Complex network problems in physics, computer science and biology
NASA Astrophysics Data System (ADS)
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently did biologists start to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science to biology and also we use methods from statistical physics in solving hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases or the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe lattice at zero temperature and then we apply this formalism to the K-SAT problem defined in Chapter 1. The phase transition which physicists study often corresponds to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions which are specific to the problems discussed in Chapter 1 and also known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using methods similar to the message passing techniques that were discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As has been highlighted by John Hopfield, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. In order to extract information from networks many algorithms have been developed. In Chapter 5 we apply polynomial algorithms like minimum spanning tree in order to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms like min-cut/max-flow and Dijkstra for understanding key properties of these networks.
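As an illustration of the last point, the sketch below (Python/NetworkX) builds a correlation-based gene network from random stand-in expression data and extracts its minimum spanning tree; the data, the 1 - |correlation| distance, and the sizes are assumptions, not the thesis's actual pipeline.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
expr = rng.random((20, 50))            # 20 genes x 50 hypothetical expression samples

# Use 1 - |correlation| as an edge weight, so strongly co-expressed genes are "close".
corr = np.corrcoef(expr)
G = nx.Graph()
for i in range(len(corr)):
    for j in range(i + 1, len(corr)):
        G.add_edge(i, j, weight=1.0 - abs(corr[i, j]))

# The minimum spanning tree keeps the strongest-association backbone of the network.
mst = nx.minimum_spanning_tree(G)
print(sorted(mst.edges(data="weight"))[:5])
```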
NASA Technical Reports Server (NTRS)
Huerre, P.; Karamcheti, K.
1976-01-01
The theory of sound propagation is examined in a viscous, heat-conducting fluid, initially at rest and in a uniform state, and contained in a rigid, impermeable duct with isothermal walls. Topics covered include: (1) theoretical formulation of the small amplitude fluctuating motions of a viscous, heat-conducting and compressible fluid; (2) sound propagation in a two dimensional duct; and (3) perturbation study of the inplane modes.
Effective Capital Provision Within Government. Methodologies for Right-Sizing Base Infrastructure
2005-01-01
unknown distributions, since they more accurately represent the complexity of real-world problems. Forecasting uncertain future demand flows is critical to...ordering system with no time lags and no additional costs for instantaneous delivery, shortage and holding costs would be eliminated, because the...order a fixed quantity, Q. 4.1.4 Analyzed Time Step. Time is an important dimension in inventory models, since the way the system changes over time affects
Good practices in managing work-related indoor air problems: a psychosocial perspective.
Lahtinen, Marjaana; Huuhtanen, Pekka; Vähämäki, Kari; Kähkönen, Erkki; Mussalo-Rauhamaa, Helena; Reijula, Kari
2004-07-01
Indoor air problems at workplaces are often exceedingly complex. Technical questions are interrelated with the dynamics of the work community, and the cooperation and interaction skills of the parties involved in the problem solving process are also put to the test. The objective of our study was to analyze the process of managing and solving indoor air problems from a psychosocial perspective. This collective case study was based on data from questionnaires, interviews and various documentary materials. Technical inspections of the buildings and indoor air measurements were also carried out. The following four factors best differentiated successful cases from impeded cases: extensive multiprofessional collaboration and participative action, systematic action and perseverance, investment in information and communication, and process thinking and learning. The study also proposed a theoretical model for the role of the psychosocial work environment in indoor air problems. The expertise related to social and human aspects of problem solving plays a significant role in solving indoor air problems. Failures to properly handle these aspects may lead to resources being wasted and result in a problematic situation becoming stagnant or worse. Copyright 2004 Wiley-Liss, Inc.
On the complexity of some quadratic Euclidean 2-clustering problems
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Pyatkin, A. V.
2016-03-01
Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).
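For concreteness, the two criteria can be evaluated for a given 2-partition as follows (Python/NumPy); this only scores a partition and does not address the NP-hard optimization itself. Note that for a full partition with free centers the two criteria coincide, by the classical identity relating within-cluster pairwise squared distances to squared distances from the centroid; the problems differ in the variants where one center is fixed or cluster cardinalities are prescribed.

```python
import numpy as np

def criterion_pairwise(cluster):
    """Sum of squared pairwise distances within one cluster."""
    d = cluster[:, None, :] - cluster[None, :, :]
    return 0.5 * np.sum(d ** 2)        # each unordered pair counted once

def criterion_centroid(cluster):
    """Cluster cardinality times the sum of squared distances to the centroid."""
    c = cluster.mean(axis=0)
    return len(cluster) * np.sum((cluster - c) ** 2)

rng = np.random.default_rng(0)
pts = rng.random((10, 2))
a, b = pts[:6], pts[6:]                # an arbitrary 2-partition of the points
# The two sums agree for this full partition (the identity mentioned above).
print(criterion_pairwise(a) + criterion_pairwise(b),
      criterion_centroid(a) + criterion_centroid(b))
```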
Automatic detection of artifacts in converted S3D video
NASA Astrophysics Data System (ADS)
Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail
2014-03-01
In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
Henschke, Cornelia
2012-05-01
The regulations for financing assistive technology devices (ATDs) are complex and fragmented and, thus, might influence adequate provision of these devices to people who need multiple ATDs. This study aims to explore and analyze patients' problems with the provision and financing of ATDs for the following two rare diseases: amyotrophic lateral sclerosis (ALS) and Duchenne muscular dystrophy (DMD). A survey was conducted by means of semi-standardized questionnaires addressing the issues of coverage decisions for ATDs and problems with provision of ATDs. Information was retrieved from ALS (n=19) and DMD (n=14) patients. Conducted interviews were transcribed verbatim and analyzed using qualitative content analysis. Respondents experienced difficulties with the provision and financing of ATDs. They underlined problems such as long approval processes and a serious bureaucratic burden, which induced inadequate provision of ATDs. Experiences of ALS and DMD respondents frequently were similar, especially regarding financing decisions and the process of decision making by sickness funds. The results suggest that difficulties in receiving and financing ATDs are common problems among ALS and DMD patients. There is a need for an interdisciplinary approach in the provision of ATDs and their financing, which should be coordinated by case managers. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Remote analysis of anthropogenic effect on boreal forests using nonlinear multidimensional models
NASA Astrophysics Data System (ADS)
Shchemel, Anton; Ivanova, Yuliya; Larko, Alexander
Anthropogenic stress from mining and refining oil and gas is becoming a significant problem in Eastern Siberia. Revealing the effect of this industry is not trivial because access to the mining sites is difficult. This raises the problem of detecting the effect of the oil and gas complex on forest ecosystems; such an assessment should reveal sites of negative change in forest communities in good time. An intelligent system that analyzes remote sensing data of different resolutions and different spectral characteristics with sophisticated nonlinear models is proposed to solve this problem. The work considers remote detection and estimation of forest degradation using freely available remote sensing data, without total field observations of the oil and gas mining territory. To analyze the state of vegetation, the following remote sensing data were used as input parameters for our models: albedo, surface temperature, and data from about thirty spectral bands in the visible and infrared regions. MODIS satellite data from the year 2000 were used. The chosen data allowed a combined estimation of parameters linked with the quality (species composition, physiological state) and the quantity of vegetation. To verify the estimates, each index was calculated both for a territory where oil and gas mining takes place and for a comparable "clear" reference territory. Monthly data for the vegetation period and annual mean values were analyzed. The work revealed some trends in the annual data that are probably linked with intensification of the anthropogenic effect on the ecosystems. The models are straightforward for the personnel of emergency control and oversight institutions to apply.
Multiphysics analysis of liquid metal annular linear induction pumps: A project overview
Maidana, Carlos Omar; Nieminen, Juha E.
2016-03-14
Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact and they can be used in regular electric power production, for naval and space propulsion systems or in fission surface power systems for planetary exploration. The coupling between the electromagnetics and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsation due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. The determination of the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem where techniques for global optimization should be used, magnetohydrodynamic instabilities understood or quantified, and multiphysics models developed and analyzed. Finally, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multiphysics analysis.
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prediction for analog circuits. The few existing methods lack correlation with circuit analysis when extracting and calculating features, so fault indicator (FI) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Noting that faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex-field modeling. Then, through an established parameter-scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. According to the obtained FI feature set, it establishes a novel model of the degeneration trend of analog circuits' single components. Finally, it uses a particle filter (PF) to update the parameters of the model and predicts the remaining useful performance (RUP) of analog circuits' single components. Because the FI feature set is calculated more reasonably, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853
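Since the particle filter (PF) step may be unfamiliar, a minimal bootstrap particle filter tracking a drifting fault indicator is sketched below (Python/NumPy). The linear drift model, noise levels, and synthetic observations are assumptions for illustration and are not the paper's circuit-derived degradation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy fault-indicator (FI) observations of a slowly drifting parameter.
true_drift = 0.02
obs = 1.0 + true_drift * np.arange(50) + rng.normal(0, 0.05, 50)

n = 1000
state = rng.normal(1.0, 0.1, n)        # particle estimates of the FI level
drift = rng.normal(0.0, 0.05, n)       # particle estimates of the degradation rate

for z in obs:
    # Predict: propagate each particle with its own drift plus process noise.
    state = state + drift + rng.normal(0, 0.01, n)
    drift = drift + rng.normal(0, 0.001, n)
    # Update: weight particles by the likelihood of the new observation.
    w = np.exp(-0.5 * ((z - state) / 0.05) ** 2)
    w /= w.sum()
    # Resample (systematic resampling would be the standard refinement).
    idx = rng.choice(n, size=n, p=w)
    state, drift = state[idx], drift[idx]

# The filtered level and drift can then be extrapolated to forecast remaining useful performance.
print(state.mean(), drift.mean())
```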
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs; filtering functions, for example, are used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
NASA Astrophysics Data System (ADS)
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.
Spear, Timothy T; Nishimura, Michael I; Simms, Patricia E
2017-08-01
Advancement in flow cytometry reagents and instrumentation has allowed for simultaneous analysis of large numbers of lineage/functional immune cell markers. Highly complex datasets generated by polychromatic flow cytometry require proper analytical software to answer investigators' questions. A problem among many investigators and flow cytometry Shared Resource Laboratories (SRLs), including our own, is a lack of access to a flow cytometry-knowledgeable bioinformatics team, making it difficult to learn and choose appropriate analysis tool(s). Here, we comparatively assess various multidimensional flow cytometry software packages for their ability to answer a specific biologic question and provide graphical representation output suitable for publication, as well as their ease of use and cost. We assessed polyfunctional potential of TCR-transduced T cells, serving as a model evaluation, using multidimensional flow cytometry to analyze 6 intracellular cytokines and degranulation on a per-cell basis. Analysis of 7 parameters resulted in 128 possible combinations of positivity/negativity, far too complex for basic flow cytometry software to analyze fully. Various software packages were used, analysis methods used in each described, and representative output displayed. Of the tools investigated, automated classification of cellular expression by nonlinear stochastic embedding (ACCENSE) and coupled analysis in Pestle/simplified presentation of incredibly complex evaluations (SPICE) provided the most user-friendly manipulations and readable output, evaluating effects of altered antigen-specific stimulation on T cell polyfunctionality. This detailed approach may serve as a model for other investigators/SRLs in selecting the most appropriate software to analyze complex flow cytometry datasets. Further development and awareness of available tools will help guide proper data analysis to answer difficult biologic questions arising from incredibly complex datasets. © Society for Leukocyte Biology.
Reliability Standards of Complex Engineering Systems
NASA Astrophysics Data System (ADS)
Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.
2017-11-01
Production and manufacturing play an important role in modern society. Industrial production is now characterized by extensive and complex communication between its parts, so preventing accidents at large industrial enterprises is especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance: damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, even loss of human lives. In terms of reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions, operable and non-operable, whereas a complex object can exist in more than two conditions; the main characteristic here is the stability of its operation. This paper develops a reliability indicator that combines set-theory methodology with a state-space method, both of which are widely used to analyze dynamically developing probabilistic processes. The research also introduces a set of reliability indicators for complex technical systems.
Complex Problem Solving: What It Is and What It Is Not
Dörner, Dietrich; Funke, Joachim
2017-01-01
Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
Alignment of RNA molecules: Binding energy and statistical properties of random sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valba, O. V., E-mail: valbaolga@gmail.com; Nechaev, S. K., E-mail: sergei.nechaev@gmail.com; Tamm, M. V., E-mail: thumm.m@gmail.com
2012-02-15
A new statistical approach to the problem of pairwise alignment of RNA sequences is proposed. The problem is analyzed for a pair of interacting polymers forming RNA-like hierarchical cloverleaf structures. An alignment is characterized by the numbers of matches, mismatches, and gaps. A weight function is assigned to each alignment; this function is interpreted as a free energy taking into account both direct monomer-monomer interactions and a combinatorial contribution due to formation of various cloverleaf secondary structures. The binding free energy is determined for a pair of RNA molecules. Statistical properties are discussed, including fluctuations of the binding energy between a pair of RNA molecules and the loop length distribution in a complex. Based on an analysis of the free energy per nucleotide for pair complexes of random RNAs as a function of the number of nucleotide types c, a hypothesis is put forward about the exclusivity of the alphabet c = 4 used by nature.
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
Optical techniques to feed and control GaAs MMIC modules for phased array antenna applications
NASA Astrophysics Data System (ADS)
Bhasin, K. B.; Anzic, G.; Kunath, R. R.; Connolly, D. J.
A complex signal distribution system is required to feed and control GaAs monolithic microwave integrated circuits (MMICs) for phased array antenna applications above 20 GHz. Each MMIC module will require one or more RF lines, one or more bias voltage lines, and digital lines to provide a minimum of 10 bits of combined phase and gain control information. In a closely spaced array, the routing of these multiple lines presents difficult topology problems as well as a high probability of signal interference. To overcome GaAs MMIC phased array signal distribution problems, optical fibers interconnected to monolithically integrated optical components with GaAs MMIC array elements are proposed as a solution. System architecture considerations using optical fibers are described. The analog and digital optical links to respectively feed and control MMIC elements are analyzed. It is concluded that a fiber optic network will reduce weight and complexity, and increase reliability and performance, but higher power will be required.
Optical techniques to feed and control GaAs MMIC modules for phased array antenna applications
NASA Technical Reports Server (NTRS)
Bhasin, K. B.; Anzic, G.; Kunath, R. R.; Connolly, D. J.
1986-01-01
A complex signal distribution system is required to feed and control GaAs monolithic microwave integrated circuits (MMICs) for phased array antenna applications above 20 GHz. Each MMIC module will require one or more RF lines, one or more bias voltage lines, and digital lines to provide a minimum of 10 bits of combined phase and gain control information. In a closely spaced array, the routing of these multiple lines presents difficult topology problems as well as a high probability of signal interference. To overcome GaAs MMIC phased array signal distribution problems, optical fibers interconnected to monolithically integrated optical components with GaAs MMIC array elements are proposed as a solution. System architecture considerations using optical fibers are described. The analog and digital optical links to respectively feed and control MMIC elements are analyzed. It is concluded that a fiber optic network will reduce weight and complexity, and increase reliability and performance, but higher power will be required.
The cost of conservative synchronization in parallel discrete event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approach the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.
Optical engineering: understanding optical system by experiments
NASA Astrophysics Data System (ADS)
Scharf, Toralf
2017-08-01
Students have to be educated in both theoretical and practical matters; either one alone is not enough to attack complex problems in research, development, and management. After their studies, students should be able to design, construct, and analyze solutions to technical problems at the highest levels of complexity. Those who have never experienced the difficulty of setting up measurements will not be able to understand, plan, and manage such complex tasks in their future careers. At EPFL a course was developed for bachelor-level education based on three pillars: concrete actions carried out by the students (enactive), a synthesis of their work in a written report (the iconic part), and input from the teacher to generalize the findings and link them to a complete abstract description (symbolic). Intensive tutoring allowed an intermodal transfer between these categories. This EIS method, originally introduced by Jerome Bruner for small children, is particularly well suited to engineering education, for which theoretical understanding alone is often not enough. The symbiosis of ex-cathedra lectures and practical work in a classroom-like setting is an innovative step towards integrated learning that complements more abstract course formats such as online courses.
Quantum communication complexity of establishing a shared reference frame.
Rudolph, Terry; Grover, Lov
2003-11-21
We discuss the aligning of spatial reference frames from a quantum communication complexity perspective. This enables us to analyze multiple rounds of communication and give several simple examples demonstrating tradeoffs between the number of rounds and the type of communication. Using a distributed variant of a quantum computational algorithm, we give an explicit protocol for aligning spatial axes via the exchange of spin-1/2 particles which makes no use of either exchanged entangled states, or of joint measurements. This protocol achieves a worst-case fidelity for the problem of "direction finding" that is asymptotically equivalent to the optimal average case fidelity achievable via a single forward communication of entangled states.
Media framing of complex issues: The case of endangered languages.
Rivenburgh, Nancy K
2013-08-01
This study investigates how media frame a global trend that is complex in nature, emergent in terms of scientific understanding, and has public policy implications: the rapid disappearance of languages. It analyzes how English-language media from 15 western, industrialized countries frame the causes and implications of endangered languages over 35 years (1971-2006) - a time period notable for growing, interdisciplinary concerns over the potential negative impacts of losing the world's linguistic diversity. The results reveal a media discourse characterized by three complementary frames that are sympathetic to the plight of endangered languages, but that present the problem, its cause, and societal implications in a logical structure that would promote public complacency.
Computational 3D structures of drug-targeting proteins in the 2009-H1N1 influenza A virus
NASA Astrophysics Data System (ADS)
Du, Qi-Shi; Wang, Shu-Qing; Huang, Ri-Bo; Chou, Kuo-Chen
2010-01-01
The neuraminidase (NA) and M2 proton channel of the influenza virus are drug-target proteins, based on which several drugs were developed. However, these once-powerful drugs have encountered drug-resistance problems with the H5N1 and H1N1 flu strains. To address this problem, the computational 3D structures of the NA and M2 proteins of the 2009-H1N1 influenza virus were built using molecular modeling techniques and computational chemistry methods. Based on these models, the structural features of the NA and M2 proteins were analyzed, the docking structures of drug-protein complexes were computed, and the residue mutations were annotated. The results may help to solve the drug-resistance problem and stimulate the design of more effective drugs against the 2009-H1N1 influenza pandemic.
Role of the state in solving the environmental problems of the industrial monoprofile cities
NASA Astrophysics Data System (ADS)
Musina, L. M.; Neucheva, M. U.
2018-01-01
The sustainable socio-economic development of monotowns is now one of the priority issues of state policy. The author analyzes state policy support for monotowns in Russia, with a main focus on programs aimed at the ecological restoration of industrial monoprofile cities. The processes of program control in monotowns within state economic policy are also analyzed. To evaluate the results of such programs, at the level of both city-forming enterprises and the monotowns themselves, principles for developing a system of evaluation criteria are substantiated. The environmental situation of monotowns depends on a complex system of interaction between the city (represented by its people and municipal authorities), private capital, and the state. Long-term sustainable development of monotowns requires the interests of all three parties to be in balance. This can be achieved by increasing the social responsibility of businesses, strengthening local government and urban identity, and enabling local communities to actively influence the activities of the municipal authorities.
NASA Technical Reports Server (NTRS)
Riccio, Gary E.; McDonald, P. Vernon
1998-01-01
The purpose of this report is to identify the essential characteristics of goal-directed whole-body motion. The report is organized into three major sections (Sections 2, 3, and 4). Section 2 reviews general themes from ecological psychology and control-systems engineering that are relevant to the perception and control of whole-body motion. These themes provide an organizational framework for analyzing the complex and interrelated phenomena that are the defining characteristics of whole-body motion. Section 3 of this report applies the organizational framework from Section 2 to the problem of perception and control of aircraft motion. This is a familiar problem in control-systems engineering and ecological psychology. Section 4 examines an essential but generally neglected aspect of vehicular control: coordination of postural control and vehicular control. To facilitate presentation of this new idea, postural control and its coordination with vehicular control are analyzed in terms of conceptual categories that are familiar in the analysis of vehicular control.
Elephants, people, parks and development: the case of the Luangwa Valley, Zambia
NASA Astrophysics Data System (ADS)
Abel, Nick; Blaikie, Piers
1986-11-01
New ideas about conserving wildlife are emerging to compete with conventional national park policies. But methods of analyzing wildlife conservation problems in Africa are inadequate for the analysis of complex issues of policy. Much of the analysis of conservation policy attempts to be ‘apolitical’ on issues charged with social conflict. Analyses are too often ahistorical when history can say a great deal about the origins of present-day ecological problems. Furthermore, problems are commonly analyzed within narrow disciplinary frameworks which predetermine the nature of conclusions and lead to professionally biased proposals. This case study of the Luangwa Valley, Zambia, is used to demonstrate a method which attempts to remedy these weaknesses. In the first part of the article we examine the role of the Luangwa National Parks in the context of the Zambian political economy, and identify social groups which compete for the resources of the national parks. Next we trace the historical origins of present-day ecological changes. These analyses lead toward a model of the Parks and some of their relationships with the national economy. We end with a proposal for communal use of wildlife which attempts to resolve some of the contradictions inherent in current policy.
United States Air Force Research Initiation Program for 1987. Volume 1
1989-04-01
complexity for analyzing such models depends upon the repair or replacement time distributions, the repair policy for damaged components and a...distributions, repair policy for various components and a number of other factors. Problems of interest for such models include the determinations of (a...Thus, some more assumption is needed as to the order in which repair is to be made when more than one component is damaged. We will adopt a policy
Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures
NASA Technical Reports Server (NTRS)
James, Benjamin Wylie
1935-01-01
This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.
Fine-scale traverses in cumulate rocks, Stillwater Complex: A lunar analogue study
NASA Technical Reports Server (NTRS)
Elthon, Donald
1988-01-01
The objective was to document fine-scale compositional variations in cumulate rocks from the Stillwater Complex in Montana and to interpret these data in the context of planetary magma fractionation processes such as those operative during the formation of the Earth's Moon. This research problem involved collecting samples in the Stillwater Complex and analyzing them by electron microprobe, X-ray fluorescence (XRF), and instrumental neutron activation analysis (INAA). The electron microprobe is used to determine the compositions of cumulus and intercumulus phases in the rocks, the XRF is used to determine the bulk-rock major element and trace element (Y, Sr, Rb, Zr, Ni, and Cr) abundances, and the INAA laboratory is used to determine the trace element (Sc, Co, Cr, Ni, Ta, Hf, U, Th, and the REE) abundances of mineral separates and bulk rocks.
How Unstable Are Complex Financial Systems? Analyzing an Inter-bank Network of Credit Relations
NASA Astrophysics Data System (ADS)
Sinha, Sitabhra; Thess, Maximilian; Markose, Sheri
The recent worldwide economic crisis of 2007-09 has focused attention on the need to analyze systemic risk in complex financial networks. We investigate the problem of robustness of such systems in the context of the general theory of dynamical stability in complex networks and, in particular, how the topology of connections influences the risk of the failure of a single institution triggering a cascade of successive collapses propagating through the network. We use data on bilateral liabilities (or exposure) in the derivatives market between 202 financial intermediaries based in the USA and Europe in the last quarter of 2009 to empirically investigate the network structure of the over-the-counter (OTC) derivatives market. We observe that the network exhibits both heterogeneity in node properties and the existence of communities. It also has a prominent core-periphery organization and can resist large-scale collapse when subjected to individual bank defaults (however, failure of any bank in the core may result in localized collapse of the innermost core with substantial loss of capital) but is vulnerable to system-wide breakdown as a result of an accompanying liquidity crisis.
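As a toy illustration of the cascade mechanism studied here (the exposure matrix, capital buffers, and default rule below are assumptions, not the authors' OTC dataset or model), a simple threshold cascade can be sketched as follows:

    # Toy default cascade on an interbank exposure network (illustrative assumptions only).
    import numpy as np

    # exposure[i, j]: amount bank i is owed by bank j (lost if j defaults)
    exposure = np.array([[0, 4, 2, 0],
                         [1, 0, 3, 1],
                         [2, 1, 0, 2],
                         [0, 2, 1, 0]], dtype=float)
    capital = np.array([5.0, 1.0, 3.0, 2.0])   # assumed equity buffers

    def cascade(initial_default):
        defaulted = {initial_default}
        while True:
            losses = exposure[:, sorted(defaulted)].sum(axis=1)
            newly = {i for i in range(len(capital))
                     if i not in defaulted and losses[i] >= capital[i]}
            if not newly:
                return sorted(defaulted)
            defaulted |= newly

    print(cascade(initial_default=0))   # banks that fail after bank 0 defaults

In this toy setting the default of bank 0 propagates through the whole network; with a core-periphery topology of the kind reported in the abstract, the outcome depends strongly on whether the initially failing node sits in the core.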
Simulating the Composite Propellant Manufacturing Process
NASA Technical Reports Server (NTRS)
Williamson, Suzanne; Love, Gregory
2000-01-01
There is a strategic interest in understanding how the propellant manufacturing process contributes to military capabilities outside the United States. The paper will discuss how system dynamics (SD) has been applied to rapidly assess the capabilities and vulnerabilities of a specific composite propellant production complex. These facilities produce a commonly used solid propellant with military applications. The authors will explain how an SD model can be configured to match a specific production facility followed by a series of scenarios designed to analyze operational vulnerabilities. By using the simulation model to rapidly analyze operational risks, the analyst gains a better understanding of production complexities. There are several benefits of developing SD models to simulate chemical production. SD is an effective tool for characterizing complex problems, especially the production process where the cascading effect of outages quickly taxes common understanding. By programming expert knowledge into an SD application, these tools are transformed into a knowledge management resource that facilitates rapid learning without requiring years of experience in production operations. It also permits the analyst to rapidly respond to crisis situations and other time-sensitive missions. Most importantly, the quantitative understanding gained from applying the SD model lends itself to strategic analysis and planning.
MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.
Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd
2018-07-01
Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.
Power-law statistics of neurophysiological processes analyzed using short signals
NASA Astrophysics Data System (ADS)
Pavlova, Olga N.; Runnova, Anastasiya E.; Pavlov, Alexey N.
2018-04-01
We discuss the problem of quantifying power-law statistics of complex processes from short signals. Based on the analysis of electroencephalograms (EEG) we compare three interrelated approaches which enable characterization of the power spectral density (PSD) and show that an application of the detrended fluctuation analysis (DFA) or the wavelet-transform modulus maxima (WTMM) method represents a useful way of indirect characterization of the PSD features from short data sets. We conclude that although DFA- and WTMM-based measures can be obtained from the estimated PSD, these tools outperform standard spectral analysis when the analyzed regime must be characterized from a very limited amount of data.
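A minimal, generic DFA implementation (not the authors' code; the synthetic white-noise input and window sizes are assumptions) illustrates how a scaling exponent is estimated from a short signal:

    import numpy as np

    def dfa_exponent(x, scales):
        y = np.cumsum(x - np.mean(x))             # integrated profile
        flucts = []
        for s in scales:
            rms = []
            for k in range(len(y) // s):
                seg = y[k * s:(k + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)       # local linear trend
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
            flucts.append(np.mean(rms))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha                               # slope of log F(s) versus log s

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(2048)             # white noise: alpha should be near 0.5
    print(dfa_exponent(signal, np.array([16, 32, 64, 128, 256])))

For a power-law PSD S(f) ~ f^(-beta), the DFA exponent of a stationary signal relates to the spectral exponent approximately as beta = 2*alpha - 1, which is what allows the indirect PSD characterization discussed above.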
Charpiat, B; Mille, F; Fombeur, P; Machon, J; Zawadzki, E; Bobay-Madic, A
2018-05-21
The development of information systems in French hospitals is mandatory. The aim of this work was to analyze the content of exchanges carried out within social networks dealing with problems encountered with hospital pharmacy information systems. Messages exchanged via the mailing list of the Association pour le Digital et l'Information en Pharmacie and abstracts of communications presented at hospital pharmacists' trade union congresses were analyzed. Those referring to information systems used in hospital pharmacies were selected. From March 2015 to June 2016, 122 e-mails sent by 80 pharmacists concerned information systems. From 2002 to 2016, 45 abstracts dealt with this topic. The problems most often addressed in these 167 documents were "parameterization and/or functionalities" (n=116), interfaces and complexity of the hospital information systems (n=52), relationships with health information technology vendors and their poor reactivity (n=32), additional workload (n=32), ergonomics (n=30), and insufficient user training (n=22). These problems are interdependent and lead to errors; to mitigate their consequences, pharmacy professionals must divert a significant number of working hours to the detriment of pharmaceutical care and of dispensing and preparing drugs. Hospital pharmacists are faced with many problems of insecurity and inefficiency generated by information systems. Further research is warranted to determine their cost, specify their deleterious effects on care, and identify the safest information systems. Copyright © 2018 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
Mitigating Local Natural Disaster through Social Aware Preparedness Using Complexity Approach
NASA Astrophysics Data System (ADS)
Supadli, Irwan; Saputri, Andini; Mawengkang, Herman
2018-01-01
During and after a natural disaster, such as a volcanic eruption, many people have to abandon their homes for temporary shelters, and an eruption may occur several times. This happened, for example, at the Sinabung volcano, located in the Karo district of North Sumatera Province, Indonesia, where people in the disaster area have become indifferent. From the perspective of society, such a local natural disaster is a complex societal problem. This research seeks ways to raise the social awareness of the affected communities so that, having experienced a serious natural disaster, they will again be able to live normally and sustainably as before. A societal complexity approach is used to address the problem. The social studies referred to in this activity analyze the social impacts arising from the implementation of the relocation itself. The scope of the social impact assessment includes: the social impact of the relocation development program, covering the impact of construction activities and their long-term effects, particularly with regard to the source and use of clean water, the sewerage system, drainage, and solid waste management; and the social impacts associated with the relocation sites and the availability of pre-existing infrastructure (public facilities, including worship, health, and education facilities) in the local environment. The social analysis was carried out on field findings, a study of related documents, and observations of the existing social environment of the Siosar settlements.
Current algorithmic solutions for peptide-based proteomics data generation and identification.
Hoopmann, Michael R; Moritz, Robert L
2013-02-01
Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.
Jin, Guangwei; Li, Kuncheng; Hu, Yingying; Qin, Yulin; Wang, Xiangqing; Xiang, Jie; Yang, Yanhui; Lu, Jie; Zhong, Ning
2011-11-01
To compare the blood oxygen level-dependent (BOLD) response, measured with functional magnetic resonance (MR) imaging, in the posterior cingulate cortex (PCC) and adjacent precuneus regions between healthy control subjects and patients with amnestic mild cognitive impairment (MCI) during problem-solving tasks. This study was approved by the institutional review board. Each subject provided written informed consent. Thirteen patients with amnestic MCI and 13 age- and sex-matched healthy control subjects participated in the study. The functional magnetic resonance (MR) imaging tasks were simplified 4 × 4-grid number placement puzzles that were divided into a simple task (using the row rule or the column rule to solve the puzzle) and a complex task (using both the row and column rules to solve the puzzle). Behavioral results and functional imaging results between the healthy control group and the amnestic MCI group were analyzed. The accuracy for the complex task in the healthy control group was significantly higher than that in the amnestic MCI group (P < .05). The healthy control group exhibited a deactivated BOLD signal intensity (SI) change in the bilateral PCC and adjacent precuneus regions during the complex task, whereas the amnestic MCI group showed activation. The positive linear correlations between the BOLD SI change in bilateral PCC and adjacent precuneus regions and in bilateral hippocampi in the amnestic MCI group were significant (P < .001), while in the healthy control group, they were not (P ≥ .23). These findings suggest that an altered BOLD response in amnestic MCI patients during complex tasks might be related to a decline in problem-solving ability and to memory impairment and, thus, may indicate a compensatory response to memory impairment. RSNA, 2011
NASA Astrophysics Data System (ADS)
Wang, Rongxi; Gao, Xu; Gao, Jianmin; Gao, Zhiyong; Kang, Jiani
2018-02-01
As one of the most important approaches for analyzing the mechanism of fault propagation, fault root cause tracing is a powerful and useful tool for detecting the fundamental causes of faults so as to prevent further propagation and amplification. To address the problems arising from the lack of systematic and comprehensive integration, a novel information transfer-based, data-driven framework for fault root cause tracing of complex electromechanical systems in the processing industry is proposed, taking into consideration the experience and qualitative analysis embodied in conventional fault root cause tracing methods. Firstly, an improved symbolic transfer entropy method is presented to construct a directed-weighted information model for a specific complex electromechanical system based on the information flow. Secondly, considering the feedback mechanisms in complex electromechanical systems, a method for determining the threshold values of the weights is developed to explore the patterns of fault propagation. Lastly, an iterative method is introduced to identify the fault development process. The fault root cause is traced by analyzing the changes in information transfer between the nodes along the fault propagation pathway. An actual fault root cause tracing application of a complex electromechanical system is used to verify the effectiveness of the proposed framework. A unique fault root cause is obtained regardless of the choice of the initial variable. Thus, the proposed framework can be flexibly and effectively used in fault root cause tracing for complex electromechanical systems in the processing industry, and forms the foundation of system vulnerability analysis and condition prediction, as well as other engineering applications.
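The core quantity in such an information-flow model can be sketched with a simplified symbolic transfer entropy (ordinal-pattern symbolization of order m, history length 1). This is a generic textbook-style estimator, not the authors' improved method; the coupled test signals and parameters are assumptions.

    from collections import Counter
    from itertools import permutations
    import numpy as np

    def symbolize(x, m=3):
        # map each length-m window to the index of its ordinal (rank) pattern
        patterns = {p: i for i, p in enumerate(permutations(range(m)))}
        return [patterns[tuple(np.argsort(x[i:i + m]))] for i in range(len(x) - m + 1)]

    def transfer_entropy(x, y, m=3):
        sx, sy = symbolize(x, m), symbolize(y, m)
        n = len(sy) - 1
        joint3 = Counter(zip(sy[1:], sy[:-1], sx[:-1]))   # (y_next, y_now, x_now)
        joint2 = Counter(zip(sy[1:], sy[:-1]))            # (y_next, y_now)
        hist_y = Counter(sy[:-1])
        hist_yx = Counter(zip(sy[:-1], sx[:-1]))
        te = 0.0
        for (yn, yo, xo), c in joint3.items():
            p_cond_full = c / hist_yx[(yo, xo)]
            p_cond_y = joint2[(yn, yo)] / hist_y[yo]
            te += (c / n) * np.log2(p_cond_full / p_cond_y)
        return te

    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)    # y is driven by past x
    print(transfer_entropy(x, y), transfer_entropy(y, x))  # first value is typically larger

Directed edge weights in the abstract's model would be such pairwise transfer entropies between monitored process variables, thresholded as described above.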
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems, which is a constrained combinatorial optimization problem and becomes more complex in the case of cooperative tasks, because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
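The permutation-style mutation operators compared in the study can be sketched directly on a task-sequence chromosome encoded as a Python list (a generic illustration; the paper's actual encoding of robots and cooperative tasks is richer):

    import random

    def swap(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    def inversion(seq):
        s = seq[:]
        i, j = sorted(random.sample(range(len(s)), 2))
        s[i:j + 1] = reversed(s[i:j + 1])
        return s

    def insertion(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s.insert(j, s.pop(i))      # indices shift after the pop; acceptable for illustration
        return s

    random.seed(0)
    tour = list(range(8))
    print(swap(tour), inversion(tour), insertion(tour))

A crossover-free, subpopulation-based GA of the kind described above would repeatedly apply such operators within each subpopulation and retain the elite individuals.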
1994-02-01
desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity...consistent fully stressed solution. 3 DESIGN SPACE MAPPING In order to reduce the computational expense required to optimize design spaces, neural networks...employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much
Transient Oscillations in Mechanical Systems of Automatic Control with Random Parameters
NASA Astrophysics Data System (ADS)
Royev, B.; Vinokur, A.; Kulikov, G.
2018-04-01
Transient oscillations in mechanical systems of automatic control with random parameters are a relevant but insufficiently studied issue. In this paper, a modified spectral method was applied to investigate the problem. The nature of the dynamic processes and the phase portraits are analyzed depending on the amplitude and frequency of the external influence. It is evident from the obtained results that the dynamic phenomena occurring in systems with random parameters under external influence are complex, and their study requires further investigation.
Advanced Artificial Intelligence Technology Testbed
NASA Technical Reports Server (NTRS)
Anken, Craig S.
1993-01-01
The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex, software systems, composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.
Solving Identity Management and Interoperability Problems at Pan-European Level
NASA Astrophysics Data System (ADS)
Sánchez García, Sergio; Gómez Oliva, Ana
In a globalized digital world, it is essential for persons and entities to have a recognized and unambiguous electronic identity that allows them to communicate with one another. The management of this identity by public administrations is an important challenge that becomes even more crucial when interoperability among public administrations of different countries becomes necessary, as persons and entities have different credentials depending on their own national legal frameworks. More specifically, different credentials and legal frameworks cause interoperability problems that prevent reliable access to public services in cross-border scenarios like today's European Union. The work in this doctoral thesis analyzes the problem in detail by studying existing proposals (mainly in Europe), proposing improvements to the defined architectures, and performing practical work to test the viability of the solutions. Moreover, this thesis will also address the long-standing security problem of identity delegation, which is especially important in complex and heterogeneous service delivery environments like those mentioned above. This is a position paper.
Time to Completion of Web-Based Physics Problems with Tutoring
Warnakulasooriya, Rasil; Palazzo, David J; Pritchard, David E
2007-01-01
We studied students performing a complex learning task, that of solving multipart physics problems with interactive tutoring on the web. We extracted the rate of completion and fraction completed as a function of time on task by retrospectively analyzing the log of student–tutor interactions. There was a spontaneous division of students into three groups, the central (and largest) group (about 65% of the students) being those who solved the problem in real time after multiple interactions with the tutorial program (primarily receiving feedback to submitted wrong answers and requesting hints). This group displayed a sigmoidal fraction-completed curve as a function of logarithmic time. The sigmoidal shape is qualitatively flatter for problems that do not include hints and wrong-answer responses. We argue that the group of students who respond quickly (about 10% of the students) is obtaining the answer from some outside source. The third group (about 25% of the students) represents those who interrupt their solution, presumably to work offline or to obtain outside help. PMID:17725054
Complexities, Catastrophes and Cities: Emergency Dynamics in Varying Scenarios and Urban Topologies
NASA Astrophysics Data System (ADS)
Narzisi, Giuseppe; Mysore, Venkatesh; Byeon, Jeewoong; Mishra, Bud
Complex Systems are often characterized by agents capable of interacting with each other dynamically, often in non-linear and non-intuitive ways. Trying to characterize their dynamics often results in partial differential equations that are difficult, if not impossible, to solve. A large city or a city-state is an example of such an evolving and self-organizing complex environment that efficiently adapts to different and numerous incremental changes to its social, cultural and technological infrastructure [1]. One powerful technique for analyzing such complex systems is Agent-Based Modeling (ABM) [9], which has seen an increasing number of applications in social science, economics and also biology. The agent-based paradigm facilitates easier transfer of domain specific knowledge into a model. ABM provides a natural way to describe systems in which the overall dynamics can be described as the result of the behavior of populations of autonomous components: agents, with a fixed set of rules based on local information and possible central control. As part of the NYU Center for Catastrophe Preparedness and Response (CCPR), we have been exploring how ABM can serve as a powerful simulation technique for analyzing large-scale urban disasters. The central problem in Disaster Management is that it is not immediately apparent whether the current emergency plans are robust against such sudden, rare and punctuated catastrophic events.
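A minimal agent-based sketch in the spirit described above (the contact network, the "alert spreading" rule, and all parameters are assumptions, not the CCPR disaster models) shows how global behavior emerges from local rules:

    import random

    random.seed(3)
    n_agents = 50
    # random contact network: each agent knows four others (assumed structure)
    neighbors = {i: random.sample([j for j in range(n_agents) if j != i], 4)
                 for i in range(n_agents)}
    alerted = {0}                    # agent 0 witnesses the incident

    for step in range(10):
        newly = set()
        for i in alerted:
            for j in neighbors[i]:
                if j not in alerted and random.random() < 0.3:   # local probabilistic rule
                    newly.add(j)
        alerted |= newly
        print(f"step {step}: {len(alerted)} agents alerted")

Varying the network topology or the local rule and re-running such simulations is exactly the kind of what-if exploration that ABM supports for emergency planning.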
Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure
2016-01-01
Background Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. Objective The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. Methods We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. Results We identified 5 high-level macrocognitive processes affecting medication management—sensemaking, planning, coordination, monitoring, and decision making—and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Conclusions Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation. PMID:27733331
Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure.
Mickelson, Robin S; Unertl, Kim M; Holden, Richard J
2016-10-12
Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. We identified 5 high-level macrocognitive processes affecting medication management-sensemaking, planning, coordination, monitoring, and decision making-and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation.
Solving Complex Problems: A Convergent Approach to Cognitive Load Measurement
ERIC Educational Resources Information Center
Zheng, Robert; Cook, Anne
2012-01-01
The study challenged the current practices in cognitive load measurement involving complex problem solving by manipulating the presence of pictures in multiple rule-based problem-solving situations and examining the cognitive load resulting from both off-line and online measures associated with complex problem solving. Forty-eight participants…
Complex wavefront sensing with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2016-09-01
There are many techniques to achieve basic wavefront sensing tasks in the weak atmospheric turbulence regime. However, in strong and deep turbulence situations, the complexity of a propagating wavefront increases significantly. Typically, beam breakup will happen and various portions of the beam will randomly interfere with each other. Consequently, some conventional techniques for wavefront sensing turn out to be inaccurate and misleading. For example, a Shack-Hartmann sensor will be confused by multi-spot/zero-spot result in some cells. The curvature sensor will be affected by random interference patterns for both the image acquired before the focal plane and the image acquired after the focal plane. We propose the use of a plenoptic sensor to solve complex wavefront sensing problems. In fact, our results show that even for multiple beams (their wavelengths can be the same) passing through the same turbulent channel, the plenoptic sensor can reconstruct the turbulence-induced distortion accurately. In this paper, we will demonstrate the plenoptic mapping principle to analyze and reconstruct the complex wavefront of a distorted laser beam.
Communications network design and costing model technical manual
NASA Technical Reports Server (NTRS)
Logan, K. P.; Somes, S. S.; Clark, C. A.
1983-01-01
This computer model provides the capability for analyzing long-haul trunking networks comprising a set of user-defined cities, traffic conditions, and tariff rates. Networks may consist of all terrestrial connectivity, all satellite connectivity, or a combination of terrestrial and satellite connectivity. Network solutions provide the least-cost routes between all cities, the least-cost network routing configuration, and terrestrial and satellite service cost totals. The CNDC model allows analyses involving three specific FCC-approved tariffs, which are uniquely structured and representative of most existing service connectivity and pricing philosophies. User-defined tariffs that can be variations of these three tariffs are accepted as input to the model and allow considerable flexibility in network problem specification. The resulting model extends the domain of network analysis from traditional fixed link cost (distance-sensitive) problems to more complex problems involving combinations of distance and traffic-sensitive tariffs.
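The least-cost routing step of such a model can be illustrated with a small, assumed city graph (the cities and link costs below are made up; the CNDC tariff structures are far richer):

    import networkx as nx

    G = nx.Graph()
    links = [("NYC", "CHI", 120), ("CHI", "DEN", 150), ("DEN", "LAX", 130),
             ("NYC", "ATL", 110), ("ATL", "LAX", 310), ("CHI", "ATL", 90)]
    G.add_weighted_edges_from(links, weight="cost")

    # least-cost route and total cost between one pair of cities
    print(nx.shortest_path(G, "NYC", "LAX", weight="cost"),
          nx.shortest_path_length(G, "NYC", "LAX", weight="cost"))

    # least-cost routes between all city pairs
    all_costs = dict(nx.all_pairs_dijkstra_path_length(G, weight="cost"))
    print(all_costs["NYC"]["DEN"])

Distance- and traffic-sensitive tariffs would enter by making the edge costs functions of mileage and carried traffic rather than fixed numbers.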
A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty
Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...
2016-11-21
Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab
Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
From problem solving to problem definition: scrutinizing the complex nature of clinical practice.
Cristancho, Sayra; Lingard, Lorelei; Regehr, Glenn
2017-02-01
In medical education, we have tended to present problems as being singular, stable, and solvable. Problem solving has, therefore, drawn much of medical education researchers' attention. This focus has been important but it is limited in terms of preparing clinicians to deal with the complexity of the 21st century healthcare system in which they will provide team-based care for patients with complex medical illness. In this paper, we use the Soft Systems Engineering principles to introduce the idea that in complex, team-based situations, problems usually involve divergent views and evolve with multiple solution iterations. As such we need to shift the conversation from (1) problem solving to problem definition, and (2) from a problem definition derived exclusively at the level of the individual to a definition derived at the level of the situation in which the problem is manifested. Embracing such a focus on problem definition will enable us to advocate for novel educational practices that will equip trainees to effectively manage the problems they will encounter in complex, team-based healthcare.
NASA Technical Reports Server (NTRS)
Johnson, Jeffrey R.
2006-01-01
This viewgraph presentation reviews the problems that non-mission researchers have in accessing data for their analyses of Mars. The increasing complexity of Mars datasets results in custom software developed by instrument teams that is often the only means of visualizing and analyzing the data. The proposed solutions are to continue efforts toward synergizing data from multiple missions and making the data, software, and derived products available in standardized, easily accessible formats; to encourage release of "lite" versions of mission-related software prior to end of mission; and to systematically process planetary image data in a coordinated way and make it available in an easily accessed form. The recommendations of the Mars Environmental GIS Workshop are reviewed.
[Pressing problems of labor hygiene and occupational pathology among office workers].
Dudarev, A A; Sorokin, G A
2012-01-01
Northwest Public Health Research Center, Ministry of Health and Social Affairs, St. Petersburg. The article substantiates the concepts of "office room" and "office worker" and characterizes the main diseases and symptoms among office workers (SBS syndrome, BRI illnesses, BRS symptoms). The complex of indoor office-environment factors that influence the health status of personnel is analyzed: indoor air quality (microclimate, aerosols, chemical and biological pollution, air ionization), external physical factors, ergonomics, intensity and strain of work, and psychosocial factors. A comparison of Russian and foreign approaches to the hygienic assessment and rating of these factors was carried out. Because existing Russian hygienic rules do not meet modern requirements, the authors argue for the development of a set of sanitary rules focused specifically on office workers.
Smith, Jackie M; Estefan, Andrew
2014-11-01
Alcohol and substance dependency are complex, problematic phenomena, which are growing worldwide. In particular, drug use and abuse among young people is a significant concern. Although addiction presents as a problem of dependent individuals, families are also profoundly affected by the family member's addiction. In this narrative literature review, we review published research from 1937 to 2014 to capture a narrative and historical perspective of addiction and family. We condense and analyze the experiences of parents with alcohol- and drug-dependent children, to emphasize the need for a more specific, in-depth exploration of mothers' experiences. Such exploration may advance nurses' understandings of individual, familial, and social complexities of parenting an addicted child. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Fang, Fang; Xiao, Yan
2006-12-01
We consider an inhomogeneous optical fiber system described by the generalized cubic complex Ginzburg-Landau (CGL) equation with varying dispersion, nonlinearity, gain (loss), nonlinear gain (absorption) and the effect of spectral limitation. Exact chirped bright and dark soliton-like solutions of the CGL equation were found by using a suitable ansatz. Furthermore, we analyze the features of the solitons and consider the problem of stability of these soliton-like solutions under finite initial perturbations. It is shown by extensive numerical simulations that both bright and dark soliton-like solutions are stable in an inhomogeneous fiber system. Finally, the interaction between two chirped bright and dark soliton-like pulses is investigated numerically.
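For orientation, one frequently used variable-coefficient form of the cubic CGL model for pulse propagation in such a fiber is (an assumed generic normalization for illustration; the paper's exact equation and sign conventions may differ):

    i \frac{\partial \psi}{\partial z}
      + \frac{\beta(z)}{2} \frac{\partial^{2} \psi}{\partial t^{2}}
      + \gamma(z) |\psi|^{2} \psi
      = i \delta(z) \psi
      + i \varepsilon(z) |\psi|^{2} \psi
      + i \mu(z) \frac{\partial^{2} \psi}{\partial t^{2}},

where \psi(z, t) is the pulse envelope, \beta(z) the varying dispersion, \gamma(z) the nonlinearity, \delta(z) the linear gain or loss, \varepsilon(z) the nonlinear gain or absorption, and \mu(z) the spectral limitation (finite gain bandwidth).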
NASA Technical Reports Server (NTRS)
Vakil, Sanjay S.; Hansman, R. John
2000-01-01
Autoflight systems in the current generation of aircraft have been implicated in several recent incidents and accidents. A contributory aspect to these incidents may be the manner in which aircraft transition between differing behaviours or 'modes.' The current state of aircraft automation was investigated and the incremental development of the autoflight system was tracked through a set of aircraft to gain insight into how these systems developed. This process appears to have resulted in a system without a consistent global representation. In order to evaluate and examine autoflight systems, a 'Hybrid Automation Representation' (HAR) was developed. This representation was used to examine several specific problems known to exist in aircraft systems. Cyclomatic complexity is an analysis tool from computer science which counts the number of linearly independent paths through a program graph. This approach was extended to examine autoflight mode transitions modelled with the HAR. A survey was conducted of pilots to identify those autoflight mode transitions which airline pilots find difficult. The transitions identified in this survey were analyzed using cyclomatic complexity to gain insight into the apparent complexity of the autoflight system from the perspective of the pilot. Mode transitions which had been identified as complex by pilots were found to have a high cyclomatic complexity. Further examination was made into a set of specific problems identified in aircraft: the lack of a consistent representation of automation, concern regarding appropriate feedback from the automation, and the implications of physical limitations on the autoflight systems. Mode transitions involved in changing to and leveling at a new altitude were identified across multiple aircraft by numerous pilots. Where possible, evaluation and verification of the behaviour of these autoflight mode transitions was investigated via aircraft-specific high fidelity simulators. Three solution approaches to concerns regarding autoflight systems, and mode transitions in particular, are presented in this thesis. The first is to use training to modify pilot behaviours, or procedures to work around known problems. The second approach is to mitigate problems by enhancing feedback. The third approach is to modify the process by which automation is designed. The Operator Directed Process forces the consideration and creation of an automation model early in the design process for use as the basis of the software specification and training.
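The graph measure referred to above is McCabe's cyclomatic complexity, V(G) = E - N + 2P for a graph with E edges, N nodes, and P connected components. A small sketch applies it to a made-up mode-transition graph (the modes and transitions are illustrative assumptions, not a specific aircraft's logic):

    import networkx as nx

    G = nx.DiGraph()
    transitions = [("ALT HOLD", "VS"), ("VS", "ALT CAP"), ("ALT CAP", "ALT HOLD"),
                   ("VS", "ALT HOLD"), ("ALT HOLD", "FLCH"), ("FLCH", "ALT CAP")]
    G.add_edges_from(transitions)

    E = G.number_of_edges()
    N = G.number_of_nodes()
    P = nx.number_weakly_connected_components(G)
    print("cyclomatic complexity:", E - N + 2 * P)   # 6 - 4 + 2*1 = 4

Applied to HAR models of altitude-capture transitions, a higher count of linearly independent paths corresponds to more distinct ways the automation can move between modes, which is the sense in which the thesis links this measure to perceived complexity.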
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Wagener, T.; Stewart, S.; Mahmoud, M.; Hartmann, H.; Springer, E.
2007-12-01
Some of the most challenging issues facing contemporary water resources management are those typified by complex coupled human-environmental systems with poorly characterized uncertainties. In other words, major decisions regarding water resources have to be made in the face of substantial uncertainty and complexity. It has been suggested that integrated models can be used to coherently assemble information from a broad set of domains, and can therefore serve as an effective means for tackling the complexity of environmental systems. Further, well-conceived scenarios can effectively inform decision making, particularly when high complexity and poorly characterized uncertainties make the problem intractable via traditional uncertainty analysis methods. This presentation discusses the integrated modeling framework adopted by SAHRA, an NSF Science & Technology Center, to investigate stakeholder-driven water sustainability issues within the semi-arid southwestern US. The multi-disciplinary, multi-resolution modeling framework incorporates a formal scenario approach to analyze the impacts of plausible (albeit uncertain) alternative futures to support adaptive management of water resources systems. Some of the major challenges involved in, and lessons learned from, this effort will be discussed.
NASA Astrophysics Data System (ADS)
Balasis, G.; Daglis, I. A.; Papadimitriou, C.; Kalimeri, M.; Anastasiadis, A.; Eftaxias, K.
2008-12-01
Dynamical complexity detection for output time series of complex systems is one of the foremost problems in physics, biology, engineering, and economic sciences. Especially in magnetospheric physics, accurate detection of the dissimilarity between normal and abnormal states (e.g. pre-storm activity and magnetic storms) can vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. Herein, we examine the fractal spectral properties of the Dst data using a wavelet analysis technique. We show that distinct changes in associated scaling parameters occur (i.e., transition from anti-persistent to persistent behavior) as an intense magnetic storm approaches. We then analyze Dst time series by introducing the non-extensive Tsallis entropy, Sq, as an appropriate complexity measure. The Tsallis entropy sensitively shows the complexity dissimilarity among different "physiological" (normal) and "pathological" states (intense magnetic storms). The Tsallis entropy implies the emergence of two distinct patterns: (i) a pattern associated with the intense magnetic storms, which is characterized by a higher degree of organization, and (ii) a pattern associated with normal periods, which is characterized by a lower degree of organization.
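For reference, the non-extensive Tsallis entropy used here as a complexity measure has the standard form (the probabilities p_i would be estimated from a partition of the Dst time series; k is a positive constant and q the entropic index):

```latex
S_q \;=\; k\,\frac{1 - \sum_{i=1}^{W} p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q \;=\; -k \sum_{i=1}^{W} p_i \ln p_i ,
```

so that the Boltzmann-Gibbs/Shannon entropy is recovered as q approaches 1; a lower Sq indicates a higher degree of organization, as reported for the intense-storm pattern.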
Probabilistic data integration and computational complexity
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.; Mosegaard, K.
2016-12-01
Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist for solving the data integration problem, either through an analytical description of the combined probability function or by sampling that probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This is partly related to the use of sampling methods, which are known to be computationally costly. Another source of computational complexity is how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark based on multiple sources of geo-information. Because one type of information is too informative (and hence conflicting), the result is a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can dramatically reduce the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples, the performance of the sampling methods used to solve the data integration problem can also be analyzed to indicate the existence of biased information. This can be used actively to avoid biases in the available information and, subsequently, in the final uncertainty evaluation.
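As a hedged sketch of the probabilistic formulation referred to above (notation assumed, not taken from the paper): if each source of information about the model parameters m is quantified as a probability density ρ_i(m), a simple product (conjunction) form of the combined state of information is, up to normalization,

```latex
\sigma(\mathbf{m}) \;\propto\; \prod_{i=1}^{N} \rho_i(\mathbf{m}),
```

and in the common special case of a prior ρ(m) and a likelihood L(m) built from geophysical data and a forward model, σ(m) ∝ ρ(m) L(m); sampling methods such as Markov chain Monte Carlo then draw realizations from σ rather than describing it analytically, which is where conflicting or biased information shows up as poor sampling efficiency.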
Lessons Learned from Crowdsourcing Complex Engineering Tasks
Kijewski-Correa, Tracy; Thain, Douglas; Kareem, Ahsan; Madey, Gregory
2015-01-01
Crowdsourcing. Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations.
Harnessing Crowdworkers for Engineering. Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks. This was done to determine if the benefits of crowdsourcing could be harnessed to accurately and effectively contribute to solving complex real-world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data.
Virtual Wind Tunnel. We compared the skill of anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students making a first pass at analyzing wind simulation data. For the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. A second phase of our experiment instructed crowd workers and students to create simulations on our Virtual Wind Tunnel website to solve a more complex task.
Conclusions. With a sufficiently comprehensive tutorial and compensation similar to typical crowdsourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students with more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers. As tasks become more complex, the employment relationship begins to become more akin to outsourcing than crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems. PMID:26383029
Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.
Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas
2010-06-25
Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation, including an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity-based methods have been shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of such immunoaffinity-based approaches is the availability of appropriate peptide-specific capture antibodies. Recent developments in these approaches, in which subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes needed to cover a target protein list can be found by formulating the task as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
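A minimal sketch of the greedy heuristic for the set cover formulation described above (illustrative only; the epitope and protein names are hypothetical, and the real pipeline includes filtering steps not shown):

```python
def greedy_epitope_cover(proteins_by_epitope):
    """Greedy set cover: repeatedly pick the terminal epitope whose
    antibody would capture the largest number of still-uncovered proteins."""
    uncovered = set().union(*proteins_by_epitope.values())
    chosen = []
    while uncovered:
        # epitope covering the most uncovered proteins
        best = max(proteins_by_epitope,
                   key=lambda e: len(proteins_by_epitope[e] & uncovered))
        gained = proteins_by_epitope[best] & uncovered
        if not gained:
            break  # remaining proteins cannot be covered by any epitope
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Hypothetical toy data: terminal epitope -> set of target proteins whose
# tryptic peptides end in that epitope.
epitopes = {
    "SLFK": {"P1", "P2", "P3"},
    "GDVR": {"P2", "P4"},
    "TNELK": {"P4", "P5"},
    "AQPR": {"P5"},
}
selection, missed = greedy_epitope_cover(epitopes)
print(selection, missed)   # e.g. ['SLFK', 'TNELK'] and an empty set
```

An exact solver (as mentioned for small datasets) would replace the greedy loop with an integer-programming formulation of the same covering constraints.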
Carvalho, Marilia Sá; Coeli, Claudia Medina; Chor, Dóra; Pinheiro, Rejane Sobrino; da Fonseca, Maria de Jesus Mendes; de Sá Carvalho, Luiz Carlos
2015-01-01
The most common modeling approaches to understanding the incidence, prevalence and control of chronic diseases in populations, such as statistical regression models, are limited when it comes to dealing with the complexity of those problems. These complex adaptive systems have characteristics such as emergent properties, self-organization and feedbacks, which structure the system's stability and resistance to change. Recently, systems science approaches have been proposed to deal with the range, complexity, and multifactorial nature of these public health problems. In this paper we applied a multilevel systemic approach to create an integrated, coherent, and increasingly precise conceptual framework, capable of aggregating different partial or specialized studies, based on the challenges of the Longitudinal Study of Adult Health (ELSA-Brasil). The failure to control blood pressure found in several of the study's subjects was discussed, based on the proposed model, by analyzing the different loops, time lags, and feedbacks that influence this outcome in a population with a high educational level and reasonably good access to health services. We were able to identify the internal circularities and cycles that generate the system's resistance to change. We believe that this study can contribute to proposing new possibilities for the research agenda and to the discussion of integrated actions in the field of public health. PMID:26171854
NASA Astrophysics Data System (ADS)
Vasilkin, Andrey
2018-03-01
The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted version will be the most efficient and economical. However, in modern market conditions, and taking into account the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze and compare any significant number of options. To solve this problem, it is expedient to use the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the labor intensity of designing. Methods of structural and parametric optimization have been adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes are constructed that illustrate and explain the introduction of structural optimization into the traditional design of steel frames. To solve the problem of synthesizing and comparing design solutions for steel frames, it is proposed to develop computing facilities that significantly reduce the labor intensity of search design and are based on the use of methods of structural and parametric optimization.
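As an illustrative, heavily simplified sketch of what parametric optimization of a steel member might look like (the member, loads, and limits below are assumptions, not from the paper: a single tension member sized with scipy, minimizing weight subject to a stress limit):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-member sizing problem: choose cross-sectional area A (cm^2)
# to minimize weight while keeping axial stress below an allowable value.
L = 600.0              # member length, cm (assumed)
rho = 7.85e-3          # steel density, kg/cm^3
F = 400e3              # axial force, N (assumed)
sigma_allow = 235.0e2  # allowable stress, N/cm^2 (~235 MPa)

weight = lambda A: rho * A[0] * L                  # objective: mass in kg
stress_margin = lambda A: sigma_allow - F / A[0]   # must stay >= 0

res = minimize(weight, x0=[50.0], method="SLSQP",
               bounds=[(1.0, 500.0)],
               constraints=[{"type": "ineq", "fun": stress_margin}])
print(res.x[0])  # optimal area, here F / sigma_allow ≈ 17.0 cm^2
```

A full frame synthesis tool of the kind proposed would wrap many such parametric problems inside a structural-optimization loop that also varies the frame topology.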
Pickett, Christopher L.; Corb, Benjamin W.; Matthews, C. Robert; Sundquist, Wesley I.; Berg, Jeremy M.
2015-01-01
The US research enterprise is under significant strain due to stagnant funding, an expanding workforce, and complex regulations that increase costs and slow the pace of research. In response, a number of groups have analyzed the problems and offered recommendations for resolving these issues. However, many of these recommendations lacked follow-up implementation, allowing the damage of stagnant funding and outdated policies to persist. Here, we analyze nine reports published since the beginning of 2012 and consolidate over 250 suggestions into eight consensus recommendations made by the majority of the reports. We then propose how to implement these consensus recommendations, and we identify critical issues, such as improving workforce diversity and stakeholder interactions, on which the community has yet to achieve consensus. PMID:26195768
Developing an Approach for Analyzing and Verifying System Communication
NASA Technical Reports Server (NTRS)
Stratton, William C.; Lindvall, Mikael; Ackermann, Chris; Sibol, Deane E.; Godfrey, Sally
2009-01-01
This slide presentation reviews a project for developing an approach for analyzing and verifying inter-system communication. The motivation for the study was that software systems in the aerospace domain are inherently complex and operate under tight resource constraints, so systems of systems must communicate with each other to fulfill their tasks, and such systems of systems require reliable communications. The technical approach was to develop a system, DynSAVE, that detects communication problems among the systems. The project enhanced the proven Software Architecture Visualization and Evaluation (SAVE) tool to create Dynamic SAVE (DynSAVE). The approach monitors and records low-level network traffic, converts the low-level traffic into meaningful messages, and displays the messages in a way that allows issues to be detected.
Earth System Science Education Modules
NASA Astrophysics Data System (ADS)
Hall, C.; Kaufman, C.; Humphreys, R. R.; Colgan, M. W.
2009-12-01
The College of Charleston is developing several new geoscience-based education modules for integration into the Earth System Science Education Alliance (ESSEA). These three new modules provide opportunities for science and pre-service education students to participate in inquiry-based, data-driven experiences. The three new modules will be discussed in this session. Coastal Crisis is a module that analyzes rapidly changing coastlines and uses technology - remotely sensed data and geographic information systems (GIS) to delineate, understand and monitor changes in coastal environments. The beaches near Charleston, SC are undergoing erosion and therefore are used as examples of rapidly changing coastlines. Students will use real data from NASA, NOAA and other federal agencies in the classroom to study coastal change. Through this case study, learners will acquire remotely sensed images and GIS data sets from online sources, utilize those data sets within Google Earth or other visualization programs, and understand what the data is telling them. Analyzing the data will allow learners to contemplate and make predictions on the impact associated with changing environmental conditions, within the context of a coastal setting. To Drill or Not To Drill is a multidisciplinary problem based module to increase students’ knowledge of problems associated with nonrenewable resource extraction. The controversial topic of drilling in the Arctic National Wildlife Refuge (ANWR) examines whether the economic benefit of the oil extracted from ANWR is worth the social cost of the environmental damage that such extraction may inflict. By attempting to answer this question, learners must balance the interests of preservation with the economic need for oil. The learners are exposed to the difficulties associated with a real world problem that requires trade-off between environmental trust and economic well-being. The Citizen Science module challenges students to translate scientific information into words that are understandable and useful for policy makers and other stakeholders. The inability of scientists to effectively communicate with the public has been highlighted as a major reason for the anti-science attitude of a large segment of the public. This module, unlike other ESSEA modules, addresses this problem by first, investigating a global change environmental problem using Earth System Science methodologies, then developing several solutions to that problem, and finally writing a position paper for the policy makers to use. These three hands-on, real-world modules that engage students in authentic research share similar goals: 1) to use global change data sets to examine controversial environmental problems; 2) to use an earth system science approach to understand the complexity of global problems; and 3) to help students understand the political complexity of environmental problems where there is a clash between economic and ecological problems. The curriculum will meet National Standards in science, geography, math, etc.
Are middle school mathematics teachers able to solve word problems without using variable?
NASA Astrophysics Data System (ADS)
Gökkurt Özdemir, Burçin; Erdem, Emrullah; Örnek, Tuğba; Soylu, Yasin
2018-01-01
Many people consider problem solving a complex process in which variables such as x and y are used. However, problems need not be solved only by using variables; problem solving can be rationalized and made easier using practical strategies. Especially when the development of children at younger ages is considered, it is clear that mathematics teachers should be able to solve problems through concrete processes. In this context, middle school mathematics teachers' skills in solving word problems without using variables were examined in the current study. Through the case study method, this study was conducted with 60 middle school mathematics teachers with different amounts of professional experience in five provinces in Turkey. A test consisting of five open-ended word problems was used as the data collection tool. The content analysis technique was used to analyze the data. As a result of the analysis, it was seen that most of the teachers used a trial-and-error strategy or an area model as the solution strategy. On the other hand, there were also teachers who solved the problems using variables such as x, a, n or symbols such as Δ, □, ○, * and who fell into error by considering these solutions to be variable-free.
Synchronization with propagation - The functional differential equations
NASA Astrophysics Data System (ADS)
Rǎsvan, Vladimir
2016-06-01
The structure represented by one or several oscillators coupled to a one-dimensional transmission environment (e.g. a vibrating string in the mechanical case or a lossless transmission line in the electrical case) has proven attractive for research in the field of complex structures and/or complex behavior. This is because such a structure represents a generalization of various interconnection modes with lumped parameters for the oscillators. On the other hand, lossless and distortionless propagation along transmission lines has generated considerable research in electrical, thermal, hydro and control engineering, leading to the association of functional differential equations with the basic initial boundary value problems. The present research is performed at the crossroads of the aforementioned directions. We associate with the starting models some functional differential equations - in most cases of neutral type - and make use of the general theorems for existence and stability of forced oscillations for functional differential equations. The challenges introduced by the analyzed problems for the general theory are emphasized, together with the implications of the results for various applications.
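For readers unfamiliar with the terminology, a hedged illustration (a generic form, not the specific equations of the paper): a functional differential equation of neutral type is one in which the delayed state also enters through the derivative, e.g.

```latex
\frac{d}{dt}\bigl[x(t) - c\,x(t-\tau)\bigr] \;=\; f\bigl(x(t),\, x(t-\tau)\bigr),
\qquad 0 < |c| < 1,\ \tau > 0 .
```

Equations of this kind arise naturally when the boundary value problem for a lossless transmission line coupled to lumped oscillators is integrated along its characteristics, which is why the neutral case dominates in this setting.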
Students' perceptions of clinical teaching and learning strategies: a Pakistani perspective.
Khan, Basnama Ayaz; Ali, Fauziya; Vazir, Nilofar; Barolia, Rubina; Rehan, Seema
2012-01-01
The complexity of the health care environment is increasing with the explosion of technology, coupled with the issues of patients' access, equity, time efficiency, and cost containment. Nursing education must focus on means that enable students to develop the processes of active learning, problem-solving, and critical thinking, in order to enable them to deal with the complexities. This study aims at identifying the nursing students' perceptions about the effectiveness of utilized teaching and learning strategies of clinical education, in improving students' knowledge, skills, and attitudes. A descriptive cross sectional study design was utilized using both qualitative and quantitative approaches. Data were collected from 74 students, using a questionnaire that was developed for the purpose of the study and analyzed using descriptive and non-parametric statistics. The findings revealed that demonstration was the most effective strategy for improving students' skills; reflection, for improving attitudes; and problem based learning and concept map for improving their knowledge. Students' responses to open-ended questions confirmed the effectiveness of these strategies in improving their learning outcomes. Recommendations have been provided based on the findings. Copyright © 2011 Elsevier Ltd. All rights reserved.
Identification of Yeast V-ATPase Mutants by Western Blots Analysis of Whole Cell Lysates
NASA Astrophysics Data System (ADS)
Parra-Belky, Karlett
2002-11-01
A biochemistry laboratory was designed for an undergraduate course to help students better understand the link between molecular engineering and biochemistry. Students identified unknown yeast strains with high specificity using SDS-PAGE and Western blot analysis of whole cell lysates. This problem-solving exercise is a common application of biochemistry in biotechnology research. Three different strains were used: a wild-type and two mutants for the proton pump vacuolar ATPase (V-ATPase). V-ATPases are multisubunit enzymes and the mutants used were deletion mutants; each lacked one structural gene of the complex. After three, three-hour labs, mutant strains were easily identified by the students and distinguished from wild-type cells analyzing the pattern of SDS-PAGE distribution of proteins. Identifying different subunits of one multimeric protein allowed for discussion of the structure and function of this metabolic enzyme, which captured the interest of the students. The experiment can be adapted to other multimeric protein complexes and shows improvement of the described methodology over previous reports, perhaps because the problem and its solution are representative of the type of techniques currently used in research labs.
Analysis of complex decisionmaking processes. [with application to jet engine development
NASA Technical Reports Server (NTRS)
Hill, J. D.; Ollila, R. G.
1978-01-01
The analysis of corporate decisionmaking processes related to major system developments is unusually difficult because of the number of decisionmakers involved in the process and the long development cycle. A method for analyzing such decision processes is developed and illustrated through its application to the analysis of the commercial jet engine development process. The method uses interaction matrices as the key tool for structuring the problem, recording data, and analyzing the data to establish the rank order of the major factors affecting development decisions. In the example, the use of interaction matrices permitted analysts to collect and analyze approximately 50 factors that influenced decisions during the four phases of the development cycle, and to determine the key influencers of decisions at each development phase. The results of this study indicate that the cost of new technology installed on an aircraft is the prime concern of the engine manufacturer.
Toward Modeling the Intrinsic Complexity of Test Problems
ERIC Educational Resources Information Center
Shoufan, Abdulhadi
2017-01-01
The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…
ERIC Educational Resources Information Center
Tang, Hui; Kirk, John; Pienta, Norbert J.
2014-01-01
This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…
Financial impact of tertiary care in an academic medical center.
Huber, T S; Carlton, L M; O'Hern, D G; Hardt, N S; Keith Ozaki, C; Flynn, T C; Seeger, J M
2000-06-01
To analyze the financial impact of three complex vascular surgical procedures to both an academic hospital and a department of surgery and to examine the potential impact of decreased reimbursements. The cost of providing tertiary care has been implicated as one potential cause of the financial difficulties affecting academic medical centers. Patients undergoing revascularization for chronic mesenteric ischemia, elective thoracoabdominal aortic aneurysm repair, and treatment of infected aortic grafts at the University of Florida were compared with those undergoing elective infrarenal aortic reconstruction and carotid endarterectomy. Hospital costs and profit summaries were obtained from the Clinical Resource Management Office. Departmental costs and profit summary were estimated based on the procedural relative value units (RVUs), the average clinical cost per RVU ($33.12), surgeon charges, and the collection rate for the vascular surgery division (30.2%) obtained from the Faculty Group Practice. Surgeon work effort was analyzed using the procedural work RVUs and the estimated total care time. The analyses were performed for all payors and the subset of Medicare patients, and the potential impact of a 15% reduction in hospital and physician reimbursement was analyzed. Net hospital income was positive for all but one of the tertiary care procedures, but net losses were sustained by the hospital for the mesenteric ischemia and infected aortic graft groups among the Medicare patients. In contrast, the estimated reimbursement to the department of surgery for all payors was insufficient to offset the clinical cost of providing the RVUs for all procedures, and the estimated losses were greater for the Medicare patients alone. The surgeon work effort was dramatically higher for the tertiary care procedures, whereas the reimbursement per work effort was lower. A 15% reduction in reimbursement would result in an estimated net loss to the hospital for each of the tertiary care procedures and would exacerbate the estimated losses to the department. Caring for complex surgical problems is currently profitable to an academic hospital but is associated with marginal losses for a department of surgery. Economic forces resulting from further decreases in hospital and physician reimbursement may limit access to academic medical centers and surgeons for patients with complex surgical problems and may compromise the overall academic mission.
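A hedged sketch of the departmental cost/reimbursement arithmetic described above, using the stated cost per clinical RVU ($33.12) and collection rate (30.2%); the RVU count and charge figures below are invented for illustration:

```python
COST_PER_RVU = 33.12       # average clinical cost per RVU (from the study)
COLLECTION_RATE = 0.302    # vascular surgery division collection rate (from the study)

def departmental_margin(procedure_rvus, surgeon_charges):
    """Estimated departmental profit (loss) for one procedure."""
    cost = procedure_rvus * COST_PER_RVU
    reimbursement = surgeon_charges * COLLECTION_RATE
    return reimbursement - cost

# Hypothetical tertiary-care procedure: 60 total RVUs billed at $5,500.
print(departmental_margin(60, 5500))   # 1661.0 - 1987.2 = -326.2 (a net loss)
```

This is the sense in which high-work-effort procedures can remain profitable to the hospital yet produce marginal losses for the department.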
Multigrid Methods for Aerodynamic Problems in Complex Geometries
NASA Technical Reports Server (NTRS)
Caughey, David A.
1995-01-01
Work has been directed at the development of efficient multigrid methods for the solution of aerodynamic problems involving complex geometries, including the development of computational methods for the solution of both inviscid and viscous transonic flow problems. The emphasis is on problems of complex, three-dimensional geometry. The methods developed are based upon finite-volume approximations to both the Euler and the Reynolds-Averaged Navier-Stokes equations. The methods are developed for use on multi-block grids using diagonalized implicit multigrid methods to achieve computational efficiency. The work is focused upon aerodynamic problems involving complex geometries, including advanced engine inlets.
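A compact sketch of the basic multigrid idea behind such solvers, reduced to a 1-D Poisson model problem with weighted-Jacobi smoothing (illustrative only; the actual work uses diagonalized implicit multigrid for the Euler and Navier-Stokes equations on multi-block grids):

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with fixed boundary values."""
    for _ in range(sweeps):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def v_cycle(u, f, h):
    if u.size <= 3:                        # coarsest grid: solve the single unknown exactly
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u
    u = smooth(u, f, h)                    # pre-smoothing
    r = np.zeros_like(u)                   # residual of -u'' = f
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    nc = (u.size + 1) // 2
    r_c = np.zeros(nc)                     # full-weighting restriction
    r_c[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    e_c = v_cycle(np.zeros(nc), r_c, 2 * h)
    e = np.zeros_like(u)                   # linear-interpolation prolongation
    e[::2] = e_c
    e[1:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    return smooth(u + e, f, h)             # coarse-grid correction + post-smoothing

# Model problem: -u'' = pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x).
n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):                        # a few V-cycles drive the error down quickly
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # error at the discretization level
```

The point of the hierarchy is that each grid level cheaply removes the error components it can represent, giving convergence rates essentially independent of the mesh size.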
Meshless Method for Simulation of Compressible Flow
NASA Astrophysics Data System (ADS)
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades, and additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computational domain. However, when dealing with complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies.
In this study, a more robust yet simple numerical approach is used to simulate problems in an easier manner, even for complex problems. The meshless, or meshfree, method is one such development and has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have been developed to make this method more popular and accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow problems.
To address this discontinuity problem, the present research deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common types of collocating meshless methods, the RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, and some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to the one used in the finite volume method is added to capture steep gradients and shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework, and its application is illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible problem are compared with the exact solution for flow over a ramp and with solutions from a finite volume discretization and the discontinuous Galerkin method, both of which require a mesh.
The applicability and robustness of the algorithm are demonstrated on complex problems.
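A minimal sketch of the RBF-DQ idea mentioned above (an assumed, generic 1-D multiquadric implementation; the node layout, shape parameter, and absence of upwinding or blending are illustrative simplifications, not the thesis's actual scheme): the differential-quadrature weights for a derivative at a node are obtained by requiring the weighted sum to be exact for every RBF in the local support.

```python
import numpy as np

def rbf_dq_weights(x_nodes, x_eval, c=0.5):
    """First-derivative DQ weights at x_eval from a multiquadric RBF basis.

    Weights w_j satisfy sum_j w_j * phi_k(x_j) = phi_k'(x_eval) for every
    basis function phi_k centered at a node, i.e. the rule is exact for
    the RBF space spanned by the local support.
    """
    x = np.asarray(x_nodes, dtype=float)
    phi = lambda r: np.sqrt(r**2 + c**2)        # multiquadric
    dphi = lambda r: r / np.sqrt(r**2 + c**2)   # derivative w.r.t. x, with r = x - center
    A = phi(x[None, :] - x[:, None])            # A[k, j] = phi_k(x_j)
    b = dphi(x_eval - x)                        # b[k]   = phi_k'(x_eval)
    return np.linalg.solve(A, b)

# Toy check on a smooth function: derivative of sin(x) at the central node.
nodes = np.linspace(0.0, 1.0, 7)
w = rbf_dq_weights(nodes, x_eval=0.5)
print(w @ np.sin(nodes), np.cos(0.5))   # the two values should be close
```

In the conservative scheme of the thesis, such derivative weights would feed a flux-based update with upwinding rather than being applied to the solution directly.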
Research on application of intelligent computation based LUCC model in urbanization process
NASA Astrophysics Data System (ADS)
Chen, Zemin
2007-06-01
Global change study is an interdisciplinary and comprehensive research activity involving international cooperation that arose in the 1980s and has the broadest of scopes. The interaction between land use and cover change (LUCC), a research field at the crossing of natural and social science, has become one of the core subjects of global change study as well as its front edge and hot point. It is necessary to develop research on land use and cover change in the urbanization process and to build an analog model of urbanization in order to describe, simulate and analyze the dynamic behaviors of urban development change and to understand the basic characteristics and rules of the urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in changes to the quantity structure and spatial structure of urban space, and the LUCC model of the urbanization process has become an important research subject of urban geography and urban planning. In this paper, building on previous research, the author systematically analyzes research on land use/cover change in the urbanization process using the theories of complexity science and intelligent computation; builds a model for simulating and forecasting the dynamic evolution of urban land use and cover change on the basis of the cellular automata model of complexity science and multi-agent theory; extends the Markov model, the traditional CA model and the agent model; introduces complexity science and intelligent computation theory into the LUCC research model to build an intelligent-computation-based LUCC model for analog research on land use and cover change in urbanization; and performs a case study. The concrete contents are as follows.
1. Complexity of LUCC research in the urbanization process. Urbanization is analyzed using the content of complexity science and the concept of complexity features in order to reveal the complexity of LUCC research in the urbanization process. The urban space system is a complex economic and cultural phenomenon as well as a social process; it is the comprehensive characterization of urban society, economy and culture, and a complex spatial system formed by society, economy and nature. It has dissipative-structure characteristics such as openness, dynamics, self-organization and non-equilibrium. Traditional models cannot simulate these social, economic and natural driving forces of LUCC, including the main feedback relations from LUCC back to the driving forces.
2. Establishment of an extended Markov model for LUCC analog research in the urbanization process. First, the traditional LUCC research model is used to compute the rate of change of regional land use by calculating the dynamic degree, exploitation degree and consumption degree of land use; then fuzzy set theory is used to rewrite the traditional Markov model, a land-use structure transfer matrix is established, the dynamic change and development trend of land use are forecast and analyzed, and noticeable problems and corresponding measures in the urbanization process are presented according to the research results.
3. Application of intelligent computation and complexity science research methods to the LUCC analog model of the urbanization process.
On the basis of a detailed elaboration of the theory and models of LUCC research in the urbanization process, the problems of the existing models used in LUCC research are analyzed (namely, the difficulty of resolving the many complexity phenomena of a complex urban space system), and possible structural realizations of LUCC analog research are discussed in combination with the theories of intelligent computation and complexity science. Application analyses are performed on BP artificial neural networks and genetic algorithms from intelligent computation, and on the CA model and MAS technology from complexity science research; their theoretical origins and characteristics are discussed in detail, their feasibility for LUCC analog research is elaborated, and improvement methods and measures for the existing problems of this kind of model are put forward.
4. Establishment of an LUCC analog model of the urbanization process based on the theories of intelligent computation and complexity science. Based on the research on the abovementioned BP artificial neural networks, genetic algorithms, CA model and multi-agent technology, improvement methods and application assumptions for their extension to geography are put forward; an LUCC analog model of the urbanization process is built based on the CA model and the agent model; the learning mechanism of the BP artificial neural network is combined with fuzzy logic reasoning so that the rules are expressed with explicit formulas and the initial rules are amended through self-learning; and the network structure of the LUCC analog model and the methods and procedures for determining model parameters are optimized with genetic algorithms. In this paper, I introduce the research theory and methods of complexity science into LUCC analog research and present an LUCC analog model based upon the CA model and MAS theory. Meanwhile, I extend the traditional Markov model and introduce fuzzy set theory into the data screening and parameter amendment of the improved model, to improve the accuracy and feasibility of the Markov model in research on land use/cover change.
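A hedged, minimal illustration of the Markov land-use component described above (the categories and transition probabilities are invented for illustration; the paper's fuzzy-set rewriting and CA/agent coupling are not shown): the land-use structure at the next step is obtained by multiplying the current structure vector by a transition matrix.

```python
import numpy as np

# Hypothetical land-use categories and per-step transition probabilities
# (rows: from-state, columns: to-state); each row sums to 1.
categories = ["cultivated", "built-up", "forest"]
P = np.array([
    [0.90, 0.08, 0.02],   # cultivated -> ...
    [0.01, 0.98, 0.01],   # built-up   -> ...
    [0.03, 0.02, 0.95],   # forest     -> ...
])

state = np.array([0.55, 0.25, 0.20])   # current area shares

# Project the land-use structure forward, e.g. 10 steps.
for _ in range(10):
    state = state @ P

print(dict(zip(categories, state.round(3))))
```

The extensions discussed in the paper replace the fixed probabilities with fuzzy, data-screened estimates and embed the transitions in spatially explicit CA/agent rules.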
NASA Technical Reports Server (NTRS)
George, Kerry; Wu, Honglu; Willingham, Veronica; Cucinotta, Francis A.
2002-01-01
High-LET radiation is more efficient in producing complex-type chromosome exchanges than sparsely ionizing radiation, and this can potentially be used as a biomarker of radiation quality. To investigate if complex chromosome exchanges are induced by the high-LET component of space radiation exposure, damage was assessed in astronauts' blood lymphocytes before and after long duration missions of 3-4 months. The frequency of simple translocations increased significantly for most of the crewmembers studied. However, there were few complex exchanges detected and only one crewmember had a significant increase after flight. It has been suggested that the yield of complex chromosome damage could be underestimated when analyzing metaphase cells collected at one time point after irradiation, and analysis of chemically-induced PCC may be more accurate since problems with complicated cell-cycle delays are avoided. However, in this case the yields of chromosome damage were similar for metaphase and PCC analysis of astronauts' lymphocytes. It appears that the use of complex-type exchanges as biomarker of radiation quality in vivo after low-dose chronic exposure in mixed radiation fields is hampered by statistical uncertainties.
2015-07-14
AFRL-OSR-VA-TR-2015-0202. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making Under Complex and Dynamic Conditions. Grant number FA9550-12-1... ...functioning as they solve complex problems, and propose the means to improve the performance of teams, under changing or adversarial conditions. By...
Analyzing symptom data in indoor air questionnaires for primary schools.
Ung-Lanki, S; Lampi, J; Pekkanen, J
2017-09-01
Questionnaires on symptoms and perceived quality of the indoor environment are used to assess indoor environment problems, but mainly among adults. The aim of this article was to explore the best ways to analyze and report such symptom data, as part of a project to develop a parent-administered indoor air questionnaire for primary school pupils. An indoor air questionnaire with 25 questions on the child's symptoms in the last 4 weeks was sent to parents in five primary schools with indoor air problems and in five control schools. About 83% of parents (N=1470) in case schools and 82% (N=805) in control schools returned the questionnaire. In two schools, 351 (52%) parents answered the questionnaire twice with a 2-week interval. Based on the prevalence of symptoms, their test-retest repeatability (ICC), and a principal component analysis (PCA), the number of symptoms was reduced to 17 and six symptom scores were developed. Six variants of these six symptom scores were then formed and their ability to rank schools compared. Four symptom scores (respiratory, lower respiratory, eye, and general symptoms), analyzed as dichotomized variables, maintained the diversity of the symptom data sufficiently well and captured the between-school differences in symptom prevalence, when compared to more complex and numerous scores. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
Addressing the unmet need for visualizing conditional random fields in biological data
2014-01-01
Background. The biological world is replete with phenomena that appear to be ideally modeled and analyzed by one archetypal statistical framework - the Graphical Probabilistic Model (GPM). The structure of GPMs is a uniquely good match for biological problems that range from aligning sequences to modeling the genome-to-phenome relationship. The fundamental questions that GPMs address involve making decisions based on a complex web of interacting factors. Unfortunately, while GPMs ideally fit many questions in biology, they are not an easy solution to apply. Building a GPM is not a simple task for an end user. Moreover, applying GPMs is also impeded by the insidious fact that the "complex web of interacting factors" inherent to a problem might be easy to define and also intractable to compute upon.
Discussion. We propose that the visualization sciences can contribute to many domains of the bio-sciences by developing tools to address archetypal representation and user interaction issues in GPMs, and in particular a variety of GPM called a Conditional Random Field (CRF). CRFs bring additional power, and additional complexity, because the CRF dependency network can be conditioned on the query data.
Conclusions. In this manuscript we examine the shared features of several biological problems that are amenable to modeling with CRFs, highlight the challenges that existing visualization and visual analytics paradigms induce for these data, and document an experimental solution called StickWRLD which, while leaving room for improvement, has been successfully applied in several biological research projects. Software and tutorials are available at http://www.stickwrld.org/ PMID:25000815
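For context, the standard linear-chain form of a CRF (a textbook form, not specific to StickWRLD or to the generalized dependency networks discussed in the paper) defines the conditional probability of a label sequence y given observations x as

```latex
p(\mathbf{y}\mid\mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})}\,
\exp\!\Bigl(\sum_{t}\sum_{k} \lambda_k\, f_k(y_{t-1},\, y_t,\, \mathbf{x},\, t)\Bigr),
\qquad
Z(\mathbf{x}) \;=\; \sum_{\mathbf{y}'}
\exp\!\Bigl(\sum_{t}\sum_{k} \lambda_k\, f_k(y'_{t-1},\, y'_t,\, \mathbf{x},\, t)\Bigr),
```

where the f_k are feature functions and the λ_k are learned weights; the partition function Z(x) depends on the observations, which is precisely what makes the dependency structure conditional on the query data and hard both to compute upon and to visualize.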
Examining the social ecology of a bar-crawl: An exploratory pilot study.
Clapp, John D; Madden, Danielle R; Mooney, Douglas D; Dahlquist, Kristin E
2017-01-01
Many of the problems associated with alcohol occur after a single drinking event (e.g. drink driving, assault). These acute alcohol problems have a huge global impact and account for a large percentage of unintentional and intentional injuries in the world. Nonetheless, alcohol research and preventive interventions rarely focus on drinking at the event-level since drinking events are complex, dynamic, and methodologically challenging to observe. This exploratory study provides an example of how event-level data may be collected, analyzed, and interpreted. The drinking behavior of twenty undergraduate students enrolled at a large Midwestern public university was observed during a single bar crawl event that is organized by students annually. Alcohol use was monitored with transdermal alcohol devices coupled with ecological momentary assessments and geospatial data. "Small N, Big Data" studies have the potential to advance health behavior theory and to guide real-time interventions. However, such studies generate large amounts of within subject data that can be challenging to analyze and present. This study examined how to visually display event-level data and also explored the relationship between some basic indicators and alcohol consumption.
Intelligent control of a planning system for astronaut training.
Ortiz, J; Chen, G
1999-07-01
This work intends to design, analyze and solve, from the systems control perspective, a complex, dynamic, and multiconstrained planning system for generating training plans for crew members of the NASA-led International Space Station. Various intelligent planning systems have been developed within the framework of artificial intelligence. These planning systems generally lack a rigorous mathematical formalism to allow a reliable and flexible methodology for their design, modeling, and performance analysis in a dynamical, time-critical, and multiconstrained environment. Formulating the planning problem in the domain of discrete-event systems under a unified framework such that it can be modeled, designed, and analyzed as a control system will provide a self-contained theory for such planning systems. This will also provide a means to certify various planning systems for operations in the dynamical and complex environments in space. The work presented here completes the design, development, and analysis of an intricate, large-scale, and representative mathematical formulation for intelligent control of a real planning system for Space Station crew training. This planning system has been tested and used at NASA-Johnson Space Center.
Lindahl, Paul A; Moore, Michael J
2016-08-02
Iron, copper, zinc, manganese, cobalt, and molybdenum play important roles in mitochondrial biochemistry, serving to help catalyze reactions in numerous metalloenzymes. These metals are also found in labile "pools" within mitochondria. Although the composition and cellular function of these pools are largely unknown, they are thought to be comprised of nonproteinaceous low-molecular-mass (LMM) metal complexes. Many problems must be solved before these pools can be fully defined, especially problems stemming from the lability of such complexes. This lability arises from inherently weak coordinate bonds between ligands and metals. This is an advantage for catalysis and trafficking, but it makes characterization difficult. The most popular strategy for investigating such pools is to detect them using chelator probes with fluorescent properties that change upon metal coordination. Characterization is limited because of the inevitable destruction of the complexes during their detection. Moreover, probes likely react with more than one type of metal complex, confusing analyses. An alternative approach is to use liquid chromatography (LC) coupled with inductively coupled plasma mass spectrometry (ICP-MS). With help from a previous lab member, the authors recently developed an LC-ICP-MS approach to analyze LMM extracts from yeast and mammalian mitochondria. They detected several metal complexes, including Fe580, Fe1100, Fe1500, Cu5000, Zn1200, Zn1500, Mn1100, Mn2000, Co1200, Co1500, and Mo780 (numbers refer to approximate masses in daltons). Many of these may be used to metalate apo-metalloproteins as they fold inside the organelle. The LC-based approach also has challenges, e.g., in distinguishing artifactual metal complexes from endogenous ones, due to the fact that cells must be disrupted to form extracts before they are passed through chromatography columns prior to analysis. Ultimately, both approaches will be needed to characterize these intriguing complexes and to elucidate their roles in mitochondrial biochemistry.
Students' conceptual performance on synthesis physics problems with varying mathematical complexity
NASA Astrophysics Data System (ADS)
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-06-01
A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems, sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N =179 ) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.
Sensor control of robot arc welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1985-01-01
A basic problem in the application of robots to welding, namely how to guide a torch along a weld seam using sensory information, was studied. The aim was to improve the quality and consistency of certain Gas Tungsten Arc welds on the Space Shuttle Main Engine (SSME) that are too complex geometrically for conventional automation and are therefore done by hand. The particular problems associated with SSME manufacturing and weld-seam tracking were analyzed, with an emphasis on computer vision methods. Special interface software was developed for the MINC computer, allowing it to be used both as a test system to check out the robot interface software and later as a development tool for further investigation of sensory systems to be incorporated into welding procedures.
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Chung, T. J.
2001-01-01
A parallelized version of the Flowfield Dependent Variation (FDV) method is developed to analyze a problem of current research interest, the flowfield resulting from a triple shock/boundary layer interaction. Such flowfields are often encountered in the inlets of high-speed air-breathing vehicles, including the NASA Hyper-X research vehicle. In order to resolve the complex shock structure and to provide adequate resolution for boundary layer computations of the convective heat transfer from surfaces inside the inlet, models containing over 500,000 nodes are needed. Efficient parallelization of the computation is essential to achieving results in a timely manner. Results from a parallelization scheme based upon multi-threading, as implemented on multiple-processor supercomputers and workstations, are presented.
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as to provide some insights into the design of robust networks.
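A hedged sketch of the degree-preserving rewiring and targeted-attack evaluation described above (using networkx; for brevity this shows a plain stochastic hill-climb rather than the paper's full Tabu Search with a tabu list and aspiration rules):

```python
import random
import networkx as nx

def robustness(G, fraction=0.2):
    """Fraction of nodes remaining in the largest connected component
    after removing the highest-degree nodes (intentional attack)."""
    H = G.copy()
    n_remove = int(fraction * G.number_of_nodes())
    targets = sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]
    H.remove_nodes_from(node for node, _ in targets)
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

random.seed(1)
G = nx.barabasi_albert_graph(200, 3, seed=1)      # BA scale-free test network
best, best_score = G.copy(), robustness(G)

for _ in range(500):                               # accept degree-preserving swaps that help
    trial = best.copy()
    nx.double_edge_swap(trial, nswap=1, max_tries=100)   # keeps every node's degree unchanged
    score = robustness(trial)
    if score > best_score:
        best, best_score = trial, score

print(robustness(G), best_score)   # the rewired network should resist the attack better
```

The Tabu Search variant differs mainly in that it also accepts non-improving swaps while forbidding recently reversed ones, which helps the search escape local optima of the robustness measure.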
An integrated radiation physics computer code system.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Harris, D. W.
1972-01-01
An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
Decision Support System for Determining Scholarship Selection using an Analytical Hierarchy Process
NASA Astrophysics Data System (ADS)
Puspitasari, T. D.; Sari, E. O.; Destarianto, P.; Riskiawan, H. Y.
2018-01-01
Decision Support Systems are computer program applications that analyze data and present it so that users can make decisions more easily. Determining scholarship recipients, in a case study of a senior high school in East Java, was not easy: an application was needed to solve the problem, to improve the accuracy of targeting prospective beneficiaries among poor students, and to speed up the screening process. This research builds a system using the Analytical Hierarchy Process (AHP), a method that decomposes a complex and unstructured problem into its components, organizes those components into a hierarchical order, assigns numerical values to human perceptions of their relative importance in pairwise comparisons, and finally, through synthesis, determines which elements have the highest priority. The accuracy of the system in this research is 90%.
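A minimal sketch of the AHP priority calculation (the criteria, the pairwise judgments, and the eigenvector approach below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three scholarship criteria,
# e.g. parental income, academic grade, distance from school (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority vector = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), compared to a random index.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58          # 0.58 is Saaty's random index for n = 3
print(w, CR)            # weights ~ [0.65, 0.23, 0.12]; CR < 0.1 means acceptable consistency
```

Candidate students would then be scored by weighting their normalized criterion values with w, which is the step that lets the system rank scholarship applicants.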
Formative feedback and scaffolding for developing complex problem solving and modelling outcomes
NASA Astrophysics Data System (ADS)
Frank, Brian; Simper, Natalie; Kaupp, James
2018-07-01
This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated through task- and process-level feedback on scaffolded in-class tasks, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.
Schmidt, Henk G.; Rikers, Remy M. J. P.; Custers, Eugene J. F. M.; Splinter, Ted A. W.; van Saase, Jan L. C. M.
2010-01-01
Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices’ decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases. PMID:20354726
Mamede, Sílvia; Schmidt, Henk G; Rikers, Remy M J P; Custers, Eugene J F M; Splinter, Ted A W; van Saase, Jan L C M
2010-11-01
Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices' decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases.
Preparing new nurses with complexity science and problem-based learning.
Hodges, Helen F
2011-01-01
Successful nurses function effectively with adaptability, improvability, and interconnectedness, and can see emerging and unpredictable complex problems. Preparing new nurses for complexity requires a significant change in prevalent but dated nursing education models for rising graduates. The science of complexity coupled with problem-based learning and peer review contributes a feasible framework for a constructivist learning environment to examine real-time systems data; explore uncertainty, inherent patterns, and ambiguity; and develop skills for unstructured problem solving. This article describes a pilot study of a problem-based learning strategy guided by principles of complexity science in a community clinical nursing course. Thirty-five senior nursing students participated during a 3-year period. Assessments included peer review, a final project paper, reflection, and a satisfaction survey. Results included higher-than-expected levels of student satisfaction, increased breadth and analysis of complex data, acknowledgment of communities as complex adaptive systems, and overall higher-level thinking skills than in previous years. 2011, SLACK Incorporated.
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1978-01-01
A unifying approach, based on a generalization of Pearson's differential equation of statistical theory, is proposed for both the representation of particulate size distribution and the interpretation of radiometric measurements in terms of this parameter. A single-parameter gamma-type distribution is introduced, and it is shown that inversion can only provide the dimensionless parameter, r/ab (where r = particle radius, a = effective radius, b = effective variance), at least when the distribution vanishes at both ends. The basic inversion problem in reconstructing the particle size distribution is analyzed, and the existing methods are reviewed (with emphasis on their capabilities) and classified. A two-step strategy is proposed for simultaneously determining the complex refractive index and reconstructing the size distribution of atmospheric particulates.
NASA Technical Reports Server (NTRS)
Davies, Misty D.; Gundy-Burlet, Karen
2010-01-01
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
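The following sketch illustrates the Monte Carlo Filtering idea described above, with PCA-derived variables added alongside the original inputs; the toy model, the output threshold, and the use of a Kolmogorov-Smirnov test to rank inputs are assumptions for illustration, not the authors' tool.

```python
# Sketch of Monte Carlo Filtering: inputs are split into "behavioral" /
# "non-behavioral" groups by an output criterion, and a two-sample KS test
# flags the inputs whose distributions differ most between the groups.
# The toy model, threshold, and use of PCA scores as derived inputs are
# illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 5000, 4
X = rng.normal(size=(n, d))                                        # sampled input parameters
y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=n)   # toy system output

# Augment the original inputs with PCA-derived (decorrelated) variables.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = np.hstack([X, Xc @ Vt.T])

behavioral = y > np.quantile(y, 0.9)      # outputs of interest
for j in range(features.shape[1]):
    ks, p = stats.ks_2samp(features[behavioral, j], features[~behavioral, j])
    print(f"feature {j}: KS={ks:.3f} p={p:.1e}")
```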
Automatic high-throughput screening of colloidal crystals using machine learning
NASA Astrophysics Data System (ADS)
Spellings, Matthew; Glotzer, Sharon C.
Recent improvements in hardware and software have united to pose an interesting problem for computational scientists studying self-assembly of particles into crystal structures: while studies covering large swathes of parameter space can be dispatched at once using modern supercomputers and parallel architectures, identifying the different regions of a phase diagram is often a serial task completed by hand. While analytic methods exist to distinguish some simple structures, they can be difficult to apply, and automatic identification of more complex structures is still lacking. In this talk we describe one method to create numerical "fingerprints" of local order and use them to analyze a study of complex ordered structures. We can use these methods as first steps toward automatic exploration of parameter space and, more broadly, the strategic design of new materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, Peter E; Wang, Weile; Law, Beverly E.
2009-01-01
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
Putting engineering back into protein engineering: bioinformatic approaches to catalyst design.
Gustafsson, Claes; Govindarajan, Sridhar; Minshull, Jeremy
2003-08-01
Complex multivariate engineering problems are commonplace and not unique to protein engineering. Mathematical and data-mining tools developed in other fields of engineering have now been applied to analyze sequence-activity relationships of peptides and proteins and to assist in the design of proteins and peptides with specified properties. Decreasing costs of DNA sequencing in conjunction with methods to quickly synthesize statistically representative sets of proteins allow modern heuristic statistics to be applied to protein engineering. This provides an alternative approach to expensive assays or unreliable high-throughput surrogate screens.
Crossing symmetry in alpha space
NASA Astrophysics Data System (ADS)
Hogervorst, Matthijs; van Rees, Balt C.
2017-11-01
We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.
A service-oriented data access control model
NASA Astrophysics Data System (ADS)
Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali
2017-01-01
The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. Faced with complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models, and building on the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identification for subject and object, and ensures that system services access databases securely.
AOIPS water resources data management system
NASA Technical Reports Server (NTRS)
Vanwie, P.
1977-01-01
The text and computer-generated displays used to demonstrate the AOIPS (Atmospheric and Oceanographic Information Processing System) water resources data management system are investigated. The system was developed to assist hydrologists in analyzing the physical processes occurring in watersheds. It was designed to alleviate some of the problems encountered while investigating the complex interrelationships of variables such as land-cover type, topography, precipitation, snow melt, surface runoff, evapotranspiration, and streamflow rates. The system has an interactive image processing capability and a color video display for presenting results as they are obtained.
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
ERIC Educational Resources Information Center
Nelson, Tenneisha; Squires, Vicki
2017-01-01
Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…
Yugandhar, K; Gromiha, M Michael
2014-09-01
Protein-protein interactions are intrinsic to virtually every cellular process. Predicting the binding affinity of protein-protein complexes is one of the challenging problems in computational and molecular biology. In this work, we related sequence features of protein-protein complexes with their binding affinities using machine learning approaches. We set up a database of 185 protein-protein complexes for which the interacting pairs are heterodimers and their experimental binding affinities are available. In addition, we developed a set of 610 features from the sequences of the protein complexes and utilized the Ranker search method, which combines an attribute evaluator with a ranking method, for selecting specific features. We analyzed several machine learning algorithms to discriminate protein-protein complexes into high and low affinity groups based on their Kd values. Our results showed a 10-fold cross-validation accuracy of 76.1% with a combination of nine features using support vector machines. Further, we observed an accuracy of 83.3% on an independent test set of 30 complexes. We suggest that our method would serve as an effective tool for identifying the interacting partners in protein-protein interaction networks and in human-pathogen interactions based on the strength of interactions. © 2014 Wiley Periodicals, Inc.
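A minimal sketch of the discrimination step described above, assuming synthetic feature values and an arbitrary Kd cutoff in place of the curated dataset:

```python
# Sketch: classify complexes into high/low affinity with an SVM and 10-fold
# cross-validation. Feature values and the Kd cutoff are synthetic
# placeholders, not the published data or selected features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(185, 9))          # nine selected sequence features (synthetic)
kd = 10 ** rng.uniform(-12, -3, 185)   # synthetic dissociation constants
y = (kd < 1e-8).astype(int)            # 1 = high affinity, 0 = low affinity

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```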
A communication efficient and scalable distributed data mining for the astronomical data
NASA Astrophysics Data System (ADS)
Govada, A.; Sahay, S. K.
2016-07-01
In 2020, ∼60 PB of archived data will be accessible to astronomers, but analyzing such an enormous volume of data will be a challenging task. This is basically due to the computational model in which the data are downloaded from complex, geographically distributed archives to a central site and then analyzed on local systems. Because the data have to be downloaded to the central site, network bandwidth limitations become a hindrance to scientific discovery, and analyzing PB-scale data on local machines in a centralized manner is also challenging. The virtual observatory is a step toward solving this problem; however, it does not provide a data mining model (Zhang et al., 2004). Adding a distributed data mining layer to the VO can be a solution, in which astronomers download the knowledge instead of the raw data and can thereafter either reconstruct the data from the downloaded knowledge or use the knowledge directly for further analysis. Therefore, in this paper, we present Distributed Load Balancing Principal Component Analysis for optimally distributing the computation among the available nodes to minimize the transmission cost and the downloading cost for the end user. The experimental analysis is done with Fundamental Plane (FP) data, Gadotti data and the complex Mfeat data. In terms of transmission cost, our approach performs better than Qi et al. and Yue et al. The analysis shows that with the complex Mfeat data, the downloading cost for the end user can be reduced by ∼90% with negligible loss in accuracy.
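The sketch below illustrates the general principle behind such distributed analysis: each site transmits only compact summaries (counts, means, scatter matrices) from which a coordinator assembles the global principal components. It is not the paper's Distributed Load Balancing PCA; the load-balancing scheme and cost model are omitted.

```python
# Sketch of "ship summaries, not raw data": each site sends its local mean,
# sample count, and scatter matrix; the coordinator builds the pooled
# covariance and its eigenvectors. Data and sizes below are arbitrary.
import numpy as np

def local_summary(X):
    mu = X.mean(axis=0)
    scatter = (X - mu).T @ (X - mu)
    return len(X), mu, scatter

def merge(summaries):
    n_tot = sum(n for n, _, _ in summaries)
    mu_tot = sum(n * mu for n, mu, _ in summaries) / n_tot
    cov = sum(S + n * np.outer(mu - mu_tot, mu - mu_tot)
              for n, mu, S in summaries) / (n_tot - 1)
    return mu_tot, cov

rng = np.random.default_rng(2)
sites = [rng.normal(size=(400, 6)) @ rng.normal(size=(6, 6)) for _ in range(3)]
mu, cov = merge([local_summary(X) for X in sites])
eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, ::-1][:, :2]       # top-2 principal directions
print(eigvals[::-1][:2])
```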
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. Because buried structures are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate whether there is a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture is no longer sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
A Process Management System for Networked Manufacturing
NASA Astrophysics Data System (ADS)
Liu, Tingting; Wang, Huifen; Liu, Linyan
With the development of computers, communication and networks, networked manufacturing has become one of the main manufacturing paradigms of the 21st century. In the networked manufacturing environment, there exist a large number of cooperative tasks susceptible to alteration, conflicts caused by shared resources, and problems of cost and quality, which increase the complexity of administration. Process management is a technology used to design, enact, control, and analyze networked manufacturing processes. It supports efficient execution, effective management, conflict resolution, cost containment and quality control. In this paper we propose an integrated process management system for networked manufacturing. Requirements of process management are analyzed, the architecture of the system is presented, and a process model considering process cost and quality is developed. Finally a case study is provided to explain how the system runs efficiently.
A Design of Product Collaborative Online Configuration Model
NASA Astrophysics Data System (ADS)
Wang, Xiaoguo; Zheng, Jin; Zeng, Qian
Addressing the needs of mass customization, product personalization and collaborative design, the paper analyzes the working mechanism of module-based product configuration technology and puts forward an information model of a modular product family. Combining case-based reasoning (CBR) with constraint satisfaction problem (CSP) solving techniques, we design an algorithm for product configuration and analyze its time complexity. Taking a car chassis as the application object, we provide a prototype system for online configuration. Using this system, designers can make appropriate changes to existing configurations in accordance with demand. This will accelerate all aspects of product development and shorten the product cycle, and the system will provide strong technical support for enterprises seeking to improve their market competitiveness.
What Causes Care Coordination Problems? A Case for Microanalysis
Zachary, Wayne; Maulitz, Russell Charles; Zachary, Drew A.
2016-01-01
Introduction: Care coordination (CC) is an important fulcrum for pursuing a range of health care goals. Current research and policy analyses have focused on aggregated data rather than on understanding what happens within individual cases. At the case level, CC emerges as a complex network of communications among providers over time, crossing and recrossing many organizational boundaries. Micro-level analysis is needed to understand where and how CC fails, as well as to identify best practices and root causes of problems. Coordination Process Diagramming: Coordination Process Diagramming (CPD) is a new framework for representing and analyzing CC arcs at the micro level, separating an arc into its participants and roles, communication structure, organizational structures, and transitions of care, all on a common time line. Conclusion: Comparative CPD analysis across a sample of CC arcs identifies common CC problems and potential root causes, showing the potential value of the framework. The analyses also suggest intervention strategies that could be applied to attack the root causes of CC problems, including organizational changes, education and training, and additional health information technology development. PMID:27563685
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
Scheduling multirobot operations in manufacturing by truncated Petri nets
NASA Astrophysics Data System (ADS)
Chen, Qin; Luh, J. Y.
1995-08-01
Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resource allocation, etc., are available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically unmanageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller subnets. The complexity involved in analyzing each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem in a manufacturing work cell.
Research on allocation efficiency of the daisy chain allocation algorithm
NASA Astrophysics Data System (ADS)
Shi, Jingping; Zhang, Weiguo
2013-03-01
As aircraft performance requirements for reliability, maneuverability and survivability have risen, the number of control effectors has increased substantially. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of an allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the nonlinear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, and its area is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
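A minimal sketch of the "micro-element" area computation: tile the moment plane into small cells and sum the cells whose centers pass an attainability test. The membership function below is a hypothetical non-convex region standing in for the daisy-chain attainable moment subset, which is not specified here.

```python
# Sketch of the "micro-element" idea: sum the areas of small cells whose
# centers lie inside a (possibly non-convex) region. The `attainable`
# test is a hypothetical stand-in for checking whether a commanded
# two-axis moment can be reproduced by the daisy-chain allocation.
import numpy as np

def microelement_area(attainable, xlim, ylim, h=0.01):
    xs = np.arange(xlim[0], xlim[1], h) + h / 2
    ys = np.arange(ylim[0], ylim[1], h) + h / 2
    XX, YY = np.meshgrid(xs, ys)
    inside = attainable(XX, YY)
    return inside.sum() * h * h

# Hypothetical non-convex region: unit disk minus a quarter annulus.
def attainable(mx, my):
    r = np.hypot(mx, my)
    return (r <= 1.0) & ~((mx > 0) & (my > 0) & (r > 0.5))

# Exact area of this test region is 13*pi/16 ~= 2.553.
print(microelement_area(attainable, (-1, 1), (-1, 1)))
```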
Classification of time series patterns from complex dynamic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Rao, N.
1998-07-01
An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.
Rule-based modeling and simulations of the inner kinetochore structure.
Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar
2013-09-01
Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variety of protein complexes. Available classical modeling approaches are often insufficient for the detailed analysis of very large and complex networks. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimation of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome occurs efficiently only at the higher protein concentrations realized during S-phase, and may not occur in G1. Above a certain nucleosome distance the protein bridge barely formed, pointing toward the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
Yousefi, Alireza; Bazrafkan, Leila; Yamani, Nikoo
2015-07-01
The supervision of academic theses at universities of medical sciences is one of the most important issues, with several challenges. The aim of the present study is to discover the nature of the problems and challenges of thesis supervision in Iranian universities of medical sciences. The study was conducted with a qualitative method using a conventional content analysis approach. Nineteen faculty members were selected using purposive sampling, and 11 postgraduate medical sciences students (PhD students and residents) were selected on the basis of theoretical sampling. The data were gathered through semi-structured interviews and field observations at Shiraz and Isfahan universities of medical sciences from September 2012 to December 2014. Qualitative content analysis with a conventional approach was used to analyze the data. In the research supervision process, faculty members and students faced a number of complexities and challenges. The obtained codes were categorized, based on their characteristics, under four themes: "contextual problems", "role ambiguity in thesis supervision", "poor reflection in supervision" and "ethical problems". The results of this study revealed a need for more attention to planning and to defining the supervisory role in research supervision. Improving the quality of the supervisor-student relationship must also be considered, alongside improving the research context in the area of research supervision.
NASA Astrophysics Data System (ADS)
Erkol, Şirag; Yücel, Gönenç
In this study, the problem of seed selection is investigated. This problem is mainly treated as an optimization problem, which has been proved to be NP-hard. There are several heuristic approaches in the literature, which mostly use algorithmic heuristics. These approaches mainly focus on the trade-off between computational complexity and accuracy; although the accuracy of algorithmic heuristics is high, they also have high computational complexity. Furthermore, the literature generally assumes that complete information on the structure and features of a network is available, which is often not the case. For the study, a simulation model is constructed which is capable of creating networks, performing seed selection heuristics, and simulating diffusion models. Novel metric-based seed selection heuristics that rely only on partial information are proposed and tested using the simulation model. These heuristics use local information available from nodes in the synthetically created networks. The performances of the heuristics are comparatively analyzed on three different network types. The results clearly show that the performance of a heuristic depends on the structure of a network; a heuristic should be selected after investigating the properties of the network at hand. More importantly, the partial-information approach provided promising results: in certain cases, selection heuristics that rely only on partial network information perform very close to similar heuristics that require complete network data.
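As an illustration of a metric-based heuristic that uses only partial information, the sketch below probes a random sample of nodes, observes their one-hop neighborhoods, and picks the highest-degree nodes seen; the sampling fraction, tie-breaking, and test network are arbitrary choices, not the study's settings.

```python
# Sketch of seed selection under partial information: only a sampled set of
# nodes and their immediate neighborhoods are observable, and seeds are the
# highest-degree nodes among those observed.
import random
import networkx as nx

def partial_info_seeds(G, k=5, probe_fraction=0.1, seed=0):
    rng = random.Random(seed)
    probed = rng.sample(list(G.nodes()), max(k, int(probe_fraction * len(G))))
    visible = set(probed)
    for u in probed:                       # one-hop neighborhoods are observable
        visible.update(G.neighbors(u))
    return sorted(visible, key=G.degree, reverse=True)[:k]

G = nx.watts_strogatz_graph(1000, 6, 0.1, seed=3)
print(partial_info_seeds(G, k=5))
```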
Understanding Wicked Problems: A Key to Advancing Environmental Health Promotion
ERIC Educational Resources Information Center
Kreuter, Marshall W.; De Rosa, Christopher; Howze, Elizabeth H.; Baldwin, Grant T.
2004-01-01
Complex environmental health problems--like air and water pollution, hazardous waste sites, and lead poisoning--are in reality a constellation of linked problems embedded in the fabric of the communities in which they occur. These kinds of complex problems have been characterized by some as "wicked problems" wherein stakeholders may have…
A new complexity measure for time series analysis and classification
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth
2013-07-01
Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition and authorship attribution. Different complexity measures proposed in the literature, like Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even with relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
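A minimal sketch of ETC as described above: repeatedly substitute the most frequent adjacent pair with a new symbol (the NSRPS step) and count the passes until the sequence becomes constant. The greedy left-to-right substitution and the simple overlap handling are simplifications of the full algorithm.

```python
# Sketch of the "Effort To Compress" (ETC) measure via Non-Sequential
# Recursive Pair Substitution: the most frequent adjacent pair is replaced
# by a fresh symbol until the sequence is constant or of length 1; ETC is
# the number of substitution passes needed.
from collections import Counter

def etc(seq):
    # Map arbitrary symbols to integer codes so fresh symbols are easy to mint.
    codes = {s: i for i, s in enumerate(dict.fromkeys(seq))}
    seq = [codes[s] for s in seq]
    fresh = len(codes)
    steps = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        pair_counts = Counter(zip(seq, seq[1:]))
        target = pair_counts.most_common(1)[0][0]
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                out.append(fresh)      # substitute the pair with a new symbol
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, fresh, steps = out, fresh + 1, steps + 1
    return steps

print(etc("0101010101"))   # regular sequence: low effort (1 pass)
print(etc("0110100110"))   # less regular sequence: higher effort
```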
Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
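RT1D itself is VBA running inside EXCEL; purely to illustrate the kind of column-transport problem such a tool solves, here is a minimal explicit finite-difference solver for 1D advection-dispersion with first-order decay. It is not the RT1D code, and the parameters are arbitrary.

```python
# Illustrative (not RT1D) explicit finite-difference solver for 1D
# advection-dispersion with first-order decay, one of the simplest
# reaction systems a column-transport code handles.
import numpy as np

L, nx = 1.0, 101                 # column length [m], grid points
v, D, k = 0.5, 1e-3, 0.2         # velocity [m/d], dispersion [m2/d], decay [1/d]
dx = L / (nx - 1)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # stability-limited time step
c = np.zeros(nx)
c[0] = 1.0                        # constant-concentration inlet boundary

for _ in range(int(2.0 / dt)):    # simulate 2 days
    dcdx = (c[1:-1] - c[:-2]) / dx                    # upwind advection
    d2cdx2 = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central dispersion
    c[1:-1] += dt * (-v * dcdx + D * d2cdx2 - k * c[1:-1])
    c[0], c[-1] = 1.0, c[-2]      # Dirichlet inlet, zero-gradient outlet

print(c[::10].round(3))           # concentration profile every 0.1 m
```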
Architectural Framework for Addressing Legacy Waste from the Cold War - 13611
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Gregory A.; Glazner, Christopher G.; Steckley, Sam
We present an architectural framework for the use of a hybrid simulation model of enterprise-wide operations used to develop system-level insight into the U.S. Department of Energy's (DOE) environmental cleanup of legacy nuclear waste at the Savannah River Site. We use this framework for quickly exploring policy and architectural options, analyzing plans, addressing management challenges and developing mitigation strategies for the DOE Office of Environmental Management (EM). The socio-technical complexity of EM's mission compels the use of a qualitative approach to complement a more quantitative discrete event modeling effort. We use this model-based analysis to pinpoint pressure and leverage points and to develop a shared conceptual understanding of the problem space and a platform for communication among stakeholders across the enterprise in a timely manner. This approach affords the opportunity to discuss problems using a unified conceptual perspective and is also general enough that it applies to a broad range of capital investment/production operations problems. (authors)
Pupillary response to complex interdependent tasks: A cognitive-load theory perspective.
Mitra, Ritayan; McNeal, Karen S; Bondell, Howard D
2017-10-01
Pupil dilation is known to indicate cognitive load. In this study, we looked at the average pupillary responses of a cohort of 29 undergraduate students during graphical problem solving. Three questions were asked, based on the same graphical input. The questions were interdependent and comprised multiple steps. We propose a novel way of analyzing pupillometry data for such tasks on the basis of eye fixations, a commonly used eyetracking parameter. We found that pupil diameter increased during the solution process. However, pupil diameter did not always reflect the expected cognitive load. This result was studied within a cognitive-load theory model. Higher-performing students showed evidence of germane load and schema creation, indicating use of the interdependent nature of the tasks to inform their problem-solving process. However, lower-performing students did not recognize the interdependent nature of the tasks and solved each problem independently, which was expressed in a markedly different pupillary response pattern. We discuss the import of our findings for instructional design.
Finell, Eerika; Seppälä, Tuija; Suoninen, Eero
2018-07-01
Suffering from a contested illness poses a serious threat to one's identity. We analyzed the rhetorical identity management strategies respondents used when depicting their health problems and lives in the context of observed or suspected indoor air (IA) problems in the workplace. The data consisted of essays collected by the Finnish Literature Society. We used discourse-oriented methods to interpret a variety of language uses in the construction of identity strategies. Six strategies were identified: respondents described themselves as normal and good citizens with strong characters, and as IA sufferers who received acknowledgement from others, offered positive meanings to their in-group, and demanded recognition. These identity strategies were located on two continua: (a) individual- and collective-level strategies and (b) dissolved and emphasized (sub)category boundaries. The practical conclusion is that professionals should be aware of these complex coping strategies when aiming to interact effectively with people suffering from contested illnesses.
Analysis of Slope Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2005-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which both preserves linearity for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence reducing the radiation dose from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered images sharper by preserving edges and boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
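The sketch below shows the alternating-direction augmented Lagrangian structure in its simplest setting, 1D TV denoising: a quadratic x-update, the linear-time soft-threshold (shrinkage) z-update, and a multiplier update. The micro-CT projection operator and the FFT-based solves of the actual reconstruction problem are omitted, so this is an illustration of the iteration structure, not the paper's algorithm.

```python
# 1D sketch of an alternating-direction augmented Lagrangian iteration for
# TV-regularized denoising: minimize 0.5*||x - b||^2 + lam*||Dx||_1.
import numpy as np

def tv_denoise(b, lam=0.5, rho=5.0, iters=200):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)            # finite-difference operator
    A = np.eye(n) + rho * D.T @ D             # fixed x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = b.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))            # quadratic x-update
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)    # shrinkage z-update
        u += D @ x - z                                             # multiplier update
    return x

rng = np.random.default_rng(4)
signal = np.repeat([0.0, 1.0, 0.3, 0.8], 50)        # piecewise-constant truth
noisy = signal + 0.1 * rng.normal(size=signal.size)
print(np.abs(tv_denoise(noisy) - signal).mean())    # mean reconstruction error
```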
Graded meshes in bio-thermal problems with transmission-line modeling method.
Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G
2014-10-01
In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including the use of graded meshes, which was 9 times faster in computational time and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that takes thermal properties into account, and thus results in more realistic modeling of complex problems, is introduced. A new way of calculating an error parameter is also introduced. The temperatures calculated between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer modeling of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Triantafyllou, A G; Zoras, S; Evagelopoulos, V
2006-11-01
Lignite mining operations and lignite-fired power stations result in major particulate pollution (fly ash and fugitive dust) problems in the areas surrounding these activities. The problem is more complicated for urban areas located not far from these activities, due to the additional contribution of urban pollution sources. Knowledge of the distribution of airborne particulate matter into size fractions has become an increasing area of focus when examining the effects of particulate pollution. Airborne particle concentration measurements are also useful for assessing air pollution levels against national and international air quality standards, and they are necessary for developing air pollutant control strategies or for evaluating the effectiveness of such strategies, especially over long periods. In this study an attempt is made to investigate the particle size distribution of fly ash and fugitive dust in a heavily industrialized (mining and power station operations) area with complex terrain in the northwestern part of Greece. Parallel measurements of total suspended particulates (TSP) and particulate matter with an aerodynamic diameter of less than 10 μm (PM10) are analyzed. These measurements were gathered from thirteen monitoring stations located in the greater area of interest. Spatial and temporal variations and trends are analyzed over the last seven years. Furthermore, the geographical variation of the PM10-TSP correlation and the PM10/TSP ratio are investigated and compared to those in the literature. The analysis indicates that a complex system of sources and meteorological conditions modulates the particulate pollution of the examined area.
Preparing for Complexity and Wicked Problems through Transformational Learning Approaches
ERIC Educational Resources Information Center
Yukawa, Joyce
2015-01-01
As the information environment becomes increasingly complex and challenging, Library and Information Studies (LIS) education is called upon to nurture innovative leaders capable of managing complex situations and "wicked problems." While disciplinary expertise remains essential, higher levels of mental complexity and adaptive…
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent technology and related techniques to complicated fault diagnosis, and an integrated intelligent control system is studied. Based on a hierarchical approach to structuring diagnostic decisions in modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed; the organization and evolution of agents in the system are proposed; the corresponding conflict resolution algorithm is given; and a layered structure of abstract agents with public attributes is built. The system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in the distributed domain.
Nielsen, H Bjørn; Almeida, Mathieu; Juncker, Agnieszka Sierakowska; Rasmussen, Simon; Li, Junhua; Sunagawa, Shinichi; Plichta, Damian R; Gautier, Laurent; Pedersen, Anders G; Le Chatelier, Emmanuelle; Pelletier, Eric; Bonde, Ida; Nielsen, Trine; Manichanh, Chaysavanh; Arumugam, Manimozhiyan; Batto, Jean-Michel; Quintanilha Dos Santos, Marcelo B; Blom, Nikolaj; Borruel, Natalia; Burgdorf, Kristoffer S; Boumezbeur, Fouad; Casellas, Francesc; Doré, Joël; Dworzynski, Piotr; Guarner, Francisco; Hansen, Torben; Hildebrand, Falk; Kaas, Rolf S; Kennedy, Sean; Kristiansen, Karsten; Kultima, Jens Roat; Léonard, Pierre; Levenez, Florence; Lund, Ole; Moumen, Bouziane; Le Paslier, Denis; Pons, Nicolas; Pedersen, Oluf; Prifti, Edi; Qin, Junjie; Raes, Jeroen; Sørensen, Søren; Tap, Julien; Tims, Sebastian; Ussery, David W; Yamada, Takuji; Renault, Pierre; Sicheritz-Ponten, Thomas; Bork, Peer; Wang, Jun; Brunak, Søren; Ehrlich, S Dusko
2014-08-01
Most current approaches for analyzing metagenomic data rely on comparisons to reference genomes, but the microbial diversity of many environments extends far beyond what is covered by reference databases. De novo segregation of complex metagenomic data into specific biological entities, such as particular bacterial strains or viruses, remains a largely unsolved problem. Here we present a method, based on binning co-abundant genes across a series of metagenomic samples, that enables comprehensive discovery of new microbial organisms, viruses and co-inherited genetic entities and aids assembly of microbial genomes without the need for reference sequences. We demonstrate the method on data from 396 human gut microbiome samples and identify 7,381 co-abundance gene groups (CAGs), including 741 metagenomic species (MGS). We use these to assemble 238 high-quality microbial genomes and identify affiliations between MGS and hundreds of viruses or genetic entities. Our method provides the means for comprehensive profiling of the diversity within complex metagenomic samples.
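The sketch below conveys the co-abundance binning idea in miniature: genes whose abundance profiles correlate strongly across samples are grouped into candidate entities. The greedy seeding and the 0.9 correlation threshold are illustrative choices, not the published pipeline.

```python
# Sketch of co-abundance binning: genes with highly correlated abundance
# profiles across many samples are grouped together as candidate entities.
import numpy as np

def coabundance_bins(abund, r_min=0.9):
    """abund: (genes x samples) abundance matrix; returns lists of gene indices."""
    corr = np.corrcoef(abund)
    unassigned = set(range(abund.shape[0]))
    bins = []
    while unassigned:
        seed = unassigned.pop()
        members = {seed} | {g for g in unassigned if corr[seed, g] > r_min}
        unassigned -= members
        bins.append(sorted(members))
    return bins

# Synthetic test: 5 underlying "species" profiles, 20 genes each, 40 samples.
rng = np.random.default_rng(5)
profiles = rng.lognormal(size=(5, 40))
genes = np.vstack([p * rng.lognormal(0, 0.05, 40) for p in profiles for _ in range(20)])
print([len(b) for b in coabundance_bins(genes)])   # bin sizes, roughly 20 each
```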
Complexity in Nature and Society: Complexity Management in the Age of Globalization
NASA Astrophysics Data System (ADS)
Mainzer, Klaus
The theory of nonlinear complex systems has become a proven problem-solving approach in the natural sciences from cosmic and quantum systems to cellular organisms and the brain. Even in modern engineering science self-organizing systems are developed to manage complex networks and processes. It is now recognized that many of our ecological, social, economic, and political problems are also of a global, complex, and nonlinear nature. What are the laws of sociodynamics? Is there a socio-engineering of nonlinear problem solving? What can we learn from nonlinear dynamics for complexity management in social, economic, financial and political systems? Is self-organization an acceptable strategy to handle the challenges of complexity in firms, institutions and other organizations? It is a main thesis of the talk that nature and society are basically governed by nonlinear and complex information dynamics. How computational is sociodynamics? What can we hope for social, economic and political problem solving in the age of globalization?.
Behavioral and cognitive outcomes for clinical trials in children with neurofibromatosis type 1.
van der Vaart, Thijs; Rietman, André B; Plasschaert, Ellen; Legius, Eric; Elgersma, Ype; Moll, Henriëtte A
2016-01-12
To evaluate the appropriateness of cognitive and behavioral outcome measures in clinical trials in neurofibromatosis type 1 (NF1) by analyzing the degree of deficits compared to reference groups, test-retest reliability, and how scores correlate between outcome measures. Data were analyzed from the Simvastatin for cognitive deficits and behavioral problems in patients with neurofibromatosis type 1 (NF1-SIMCODA) trial, a randomized placebo-controlled trial of simvastatin for cognitive deficits and behavioral problems in children with NF1. Outcome measures were compared with age-specific reference groups to identify domains of dysfunction. Pearson r was computed between before and after measurements within the placebo group to assess test-retest reliability. Principal component analysis was used to identify the internal structure in the outcome data. The strongest mean score deviations from the reference groups were observed for full-scale intelligence (-1.1 SD), Rey Complex Figure Test delayed recall (-2.0 SD), attention problems (-1.2 SD), and social problems (-1.1 SD). Long-term test-retest reliability was excellent for Wechsler scales (r > 0.88), but poor to moderate for other neuropsychological tests (r range 0.52-0.81) and Child Behavioral Checklist subscales (r range 0.40-0.79). The correlation structure revealed two strong components in the outcome measures, behavior and cognition, with no correlation between these components. Scores on psychosocial quality of life correlate strongly with behavioral problems and less with cognitive deficits. Children with NF1 show distinct deficits in multiple domains. Many outcome measures showed weak test-retest correlations over the 1-year trial period. Cognitive and behavioral outcomes are complementary. This analysis demonstrates the need to include reliable outcome measures across a variety of cognitive and behavioral domains in clinical trials for NF1. © 2015 American Academy of Neurology.
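A small sketch of the test-retest computation used here, Pearson r between baseline and end-of-trial scores within the placebo arm, on synthetic data:

```python
# Sketch of a test-retest reliability check: Pearson r between two
# administrations of the same measure within the placebo group.
# The data below are synthetic, with a latent trait plus measurement noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
true_ability = rng.normal(100, 15, 40)            # latent trait, placebo arm
baseline = true_ability + rng.normal(0, 5, 40)    # measurement noise, visit 1
followup = true_ability + rng.normal(0, 5, 40)    # measurement noise, visit 2

r, p = stats.pearsonr(baseline, followup)
print(f"test-retest r = {r:.2f} (p = {p:.1e})")   # ~0.9 for a reliable scale
```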
Guidance for modeling causes and effects in environmental problem solving
Armour, Carl L.; Williamson, Samuel C.
1988-01-01
Environmental problems are difficult to solve because their causes and effects are not easily understood. When attempts are made to analyze causes and effects, the principal challenge is organization of information into a framework that is logical, technically defensible, and easy to understand and communicate. When decisionmakers attempt to solve complex problems before an adequate cause-and-effect analysis is performed, there are serious risks. These risks include: greater reliance on subjective reasoning, lessened chance of scoping an effective problem-solving approach, impaired recognition of the need for supplemental information to attain understanding, increased chance of making unsound decisions, and lessened chance of gaining approval and financial support for a program. Cause-and-effect relationships can be modeled. This type of modeling has been applied to various environmental problems, including cumulative impact assessment (Dames and Moore 1981; Meehan and Weber 1985; Williamson et al. 1987; Raley et al. 1988) and evaluation of effects of quarrying (Sheate 1986). This guidance for field users was written because of the current interest in documenting cause-effect logic as a part of ecological problem solving. Principal literature sources relating to the modeling approach are: Riggs and Inouye (1975a, b), Erickson (1981), and United States Office of Personnel Management (1986).
Probabilities and predictions: modeling the development of scientific problem-solving skills.
Stevens, Ron; Johnson, David F; Soller, Amy
2005-01-01
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations and the rapid collection of student performance data, and it has already been used in several genetics simulations. The next step is the use of these data to understand and improve student learning in a formative manner. This article describes the development of probabilistic models of undergraduate student problem solving in molecular genetics that detailed the spectrum of strategies students used when problem solving, and how the strategic approaches evolved with experience. The actions of 776 university sophomore biology majors from three molecular biology lecture courses were recorded and analyzed. Performances on each of six simulations were first grouped by artificial neural network clustering to provide individual performance measures, and then sequences of these performances were probabilistically modeled by hidden Markov modeling to provide measures of progress. The models showed that students with different initial problem-solving abilities choose different strategies. Initial and final strategies varied across different sections of the same course and were not strongly correlated with other achievement measures. In contrast to previous studies, we observed no significant gender differences. We suggest that instructor interventions based on early student performances with these simulations may assist students to recognize effective and efficient problem-solving strategies and enhance learning.
Word problems: a review of linguistic and numerical factors contributing to their difficulty
Daroczy, Gabriella; Wolska, Magdalena; Meurers, Walt Detmar; Nuerk, Hans-Christoph
2015-01-01
Word problems (WPs) belong to the most difficult and complex problem types that pupils encounter during their elementary-level mathematical development. In the classroom setting, they are often viewed as merely arithmetic tasks; however, recent research shows that a number of linguistic verbal components not directly related to arithmetic contribute greatly to their difficulty. In this review, we will distinguish three components of WP difficulty: (i) the linguistic complexity of the problem text itself, (ii) the numerical complexity of the arithmetic problem, and (iii) the relation between the linguistic and numerical complexity of a problem. We will discuss the impact of each of these factors on WP difficulty and motivate the need for a high degree of control in stimuli design for experiments that manipulate WP difficulty for a given age group. PMID:25883575
ERIC Educational Resources Information Center
Blackburn, J. Joey; Robinson, J. Shane; Lamm, Alexa J.
2014-01-01
The purpose of this experimental study was to determine the effects of cognitive style and problem complexity on Oklahoma State University preservice agriculture teachers' (N = 56) ability to solve problems in small gasoline engines. Time to solution was operationalized as problem solving ability. Kirton's Adaption-Innovation Inventory was…
On the Complexity of Delaying an Adversary’s Project
2005-01-01
interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete, or NP-hard.
Better Spectrometers, Beautiful Spectra and Confusion for All
NASA Technical Reports Server (NTRS)
Pearson, J. C.; Brauer, C. S.; Drouin, B. J.; Yu, S.
2009-01-01
The confluence of enormous improvements in submillimeter receivers and the development of powerful large scale observatories is about to force astrophysics and the sciences that support it to develop novel approaches for interpretation of data. The historical method of observing one or two lines and carefully analyzing them in the context of a simple model is now only applicable for distant objects where only a few lines are strong enough to be observable. Modern observatories collect many GHz of high signal-to-noise spectra in a single observation and in many cases, at sufficiently high spatial resolution to start resolving chemically distinct regions. The observatories planned for the near future and the inevitable upgrades of existing facilities will make large spectral data sets the rule rather than the exception in many areas of molecular astrophysics. The methodology and organization required to fully extract the available information and interpret these beautiful spectra represents a challenge to submillimeter astrophysics similar in magnitude to the last few decades of effort in improving receivers. The quality and abundance of spectra effectively prevent line-by-line analysis from being a time-efficient proposition; however, global analysis of complex spectra is a science in its infancy. Spectroscopy at several other wavelengths has developed a number of techniques to analyze complex spectra, which can provide a great deal of guidance to the molecular astrophysics community on how to attack the complex spectrum problem. Ultimately, the challenge is one of organization, similar to building observatories, requiring teams of specialists combining their knowledge of dynamical, structural, chemical and radiative models with detailed knowledge in molecular physics and gas and grain surface chemistry to extract and exploit the enormous information content of complex spectra. This paper presents a spectroscopist's view of the necessary elements in a tool for complex spectral analysis.
Solving the Inverse-Square Problem with Complex Variables
ERIC Educational Resources Information Center
Gauthier, N.
2005-01-01
The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
Fu, Wenjiang J.; Stromberg, Arnold J.; Viele, Kert; Carroll, Raymond J.; Wu, Guoyao
2009-01-01
Over the past two decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlight various sources of experimental variation in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provide guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine fetal retardation). PMID:20233650
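As a small worked illustration of the power analysis and sample size calculation the article defines (not an example taken from the article), the following sketch computes the number of subjects per group needed to detect an assumed effect size with a two-sample t-test; the effect size, alpha, and power values are arbitrary assumptions.

```python
# Illustrative power / sample-size calculation for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d (assumed)
                                   alpha=0.05,        # type I error rate
                                   power=0.80,        # 1 - type II error rate
                                   alternative="two-sided")
print(f"~{n_per_group:.0f} subjects per group")
```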
Distributed Trajectory Flexibility Preservation for Traffic Complexity Mitigation
NASA Technical Reports Server (NTRS)
Idris, Husni; Wing, David; Delahaye, Daniel
2009-01-01
The growing demand for air travel is increasing the need for mitigation of air traffic congestion and complexity problems, which are already at high levels. At the same time new information and automation technologies are enabling the distribution of tasks and decisions from the service providers to the users of the air traffic system, with potential capacity and cost benefits. This distribution of tasks and decisions raises the concern that independent user actions will decrease the predictability and increase the complexity of the traffic system, hence inhibiting and possibly reversing any potential benefits. In answer to this concern, the authors propose the introduction of decision-making metrics for preserving user trajectory flexibility. The hypothesis is that such metrics will make user actions naturally mitigate traffic complexity. In this paper, the impact of using these metrics on traffic complexity is investigated. The scenarios analyzed include aircraft in en route airspace with each aircraft meeting a required time of arrival in a one-hour time horizon while mitigating the risk of loss of separation with the other aircraft, thus preserving its trajectory flexibility. The experiments showed promising results in that the individual trajectory flexibility preservation induced self-separation and self-organization effects in the overall traffic situation. The effects were quantified using traffic complexity metrics based on Lyapunov exponents and traffic proximity.
NASA Astrophysics Data System (ADS)
Li, Yuanyuan; Jin, Suoqin; Lei, Lei; Pan, Zishu; Zou, Xiufen
2015-03-01
The early diagnosis and investigation of the pathogenic mechanisms of complex diseases are the most challenging problems in the fields of biology and medicine. Network-based systems biology is an important technique for the study of complex diseases. The present study constructed dynamic protein-protein interaction (PPI) networks to identify dynamical network biomarkers (DNBs) and analyze the underlying mechanisms of complex diseases from a systems level. We developed a model-based framework for the construction of a series of time-sequenced networks by integrating high-throughput gene expression data into PPI data. By combining the dynamic networks and molecular modules, we identified significant DNBs for four complex diseases, including influenza caused by either H3N2 or H1N1, acute lung injury and type 2 diabetes mellitus, which can serve as warning signals for disease deterioration. Function and pathway analyses revealed that the identified DNBs were significantly enriched during key events in early disease development. Correlation and information flow analyses revealed that DNBs effectively discriminated between different disease processes and that dysfunctional regulation and disproportional information flow may contribute to the increased disease severity. This study provides a general paradigm for revealing the deterioration mechanisms of complex diseases and offers new insights into their early diagnoses.
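A composite index commonly used in the dynamical network biomarker literature scores a candidate module by high within-module variance and correlation combined with low correlation to the rest of the network. The sketch below implements that generic index; the exact criterion used in this study may differ, and the expression matrix and module indices are synthetic placeholders.

```python
# Sketch of a generic DNB composite index for one time point:
# sd_in * pcc_in / pcc_out over a candidate gene module.
import numpy as np

def dnb_index(expr, module):
    """expr: genes x samples expression matrix; module: candidate DNB genes."""
    other = np.setdiff1d(np.arange(expr.shape[0]), module)
    corr = np.abs(np.corrcoef(expr))
    sd_in = expr[module].std(axis=1).mean()
    pcc_in = corr[np.ix_(module, module)][np.triu_indices(len(module), k=1)].mean()
    pcc_out = corr[np.ix_(module, other)].mean()
    return sd_in * pcc_in / pcc_out

rng = np.random.default_rng(1)
expr = rng.normal(size=(50, 20))              # 50 genes, 20 samples (synthetic)
print(dnb_index(expr, module=np.arange(5)))   # score for a 5-gene module
```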
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
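In the spirit of the tutorial (which offers examples in Python and Stan), here is a minimal sketch, not the speaker's code, of Bayes's theorem and marginalization applied to a straight-line fit with known Gaussian measurement errors, using nothing more than a brute-force grid posterior.

```python
# Brute-force grid posterior for y = a + b*x with known measurement errors,
# illustrating Bayes's theorem and marginalization directly.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 20)
sigma = 1.0
y = 2.0 + 0.5 * x + rng.normal(0, sigma, x.size)   # simulated data

a_grid = np.linspace(0, 4, 200)
b_grid = np.linspace(0, 1, 200)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

# Log-likelihood on the grid (flat priors over the grid ranges).
resid = y - (A[..., None] + B[..., None] * x)
loglike = -0.5 * np.sum((resid / sigma) ** 2, axis=-1)
post = np.exp(loglike - loglike.max())
post /= post.sum()

# Marginal posterior for the slope: sum (integrate) over the intercept.
p_b = post.sum(axis=0)
print("posterior mean slope:", (b_grid * p_b).sum())
```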
Heat transfer evaluation in a plasma core reactor
NASA Technical Reports Server (NTRS)
Smith, D. E.; Smith, T. M.; Stoenescu, M. L.
1976-01-01
Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. A two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, due to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients and flow pattern. The results present temperature, heat flux, density and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in cartesian and spherical coordinates, are applicable to the most general heat transfer problems.
Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore
Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter
2013-01-01
A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796
Reducing assembly complexity of microbial genomes with single-molecule sequencing.
Koren, Sergey; Harhay, Gregory P; Smith, Timothy P L; Bono, James L; Harhay, Dayna M; Mcvey, Scott D; Radune, Diana; Bergman, Nicholas H; Phillippy, Adam M
2013-01-01
The short reads output by first- and second-generation DNA sequencing instruments cannot completely reconstruct microbial chromosomes. Therefore, most genomes have been left unfinished due to the significant resources required to manually close gaps in draft assemblies. Third-generation, single-molecule sequencing addresses this problem by greatly increasing sequencing read length, which simplifies the assembly problem. To measure the benefit of single-molecule sequencing on microbial genome assembly, we sequenced and assembled the genomes of six bacteria and analyzed the repeat complexity of 2,267 complete bacteria and archaea. Our results indicate that the majority of known bacterial and archaeal genomes can be assembled without gaps, at finished-grade quality, using a single PacBio RS sequencing library. These single-library assemblies are also more accurate than typical short-read assemblies and hybrid assemblies of short and long reads. Automated assembly of long, single-molecule sequencing data reduces the cost of microbial finishing to $1,000 for most genomes, and future advances in this technology are expected to drive the cost lower. This is expected to increase the number of completed genomes, improve the quality of microbial genome databases, and enable high-fidelity, population-scale studies of pan-genomes and chromosomal organization.
Nonlinear analysis of dynamic signature
NASA Astrophysics Data System (ADS)
Rashidi, S.; Fallah, A.; Towhidkhah, F.
2013-12-01
Signature is a long-trained motor skill resulting in a well-coordinated combination of segments such as strokes and loops. It is a physical manifestation of complex motor processes. The problem, generally stated, is how relative simplicity in behavior emerges from the considerable complexity of the perception-action system that produces behavior within an infinitely variable biomechanical and environmental context. To address this problem, we present evidence indicating that the motor control dynamics of the signing process are chaotic. This chaotic dynamic may explain a richer array of time-series behavior in the motor skill of signature. Nonlinear analysis is a powerful approach and a suitable tool for characterizing dynamical systems through concepts such as fractal dimension and Lyapunov exponent. Accordingly, time series of position and velocity can be analyzed in both the horizontal and vertical directions. We observed that non-integer values of the correlation dimension indicate low-dimensional deterministic dynamics. This result was confirmed by surrogate data tests. We also used the time series to calculate the largest Lyapunov exponent and obtained a positive value. These results constitute significant evidence that signature data are the outcome of chaos in a nonlinear dynamical system of motor control.
Alignment and integration of complex networks by hypergraph-based spectral clustering
NASA Astrophysics Data System (ADS)
Michoel, Tom; Nachtergaele, Bruno
2012-11-01
Complex networks possess a rich, multiscale structure reflecting the dynamical and functional organization of the systems they model. Often there is a need to analyze multiple networks simultaneously, to model a system by more than one type of interaction, or to go beyond simple pairwise interactions, but currently there is a lack of theoretical and computational methods to address these problems. Here we introduce a framework for clustering and community detection in such systems using hypergraph representations. Our main result is a generalization of the Perron-Frobenius theorem from which we derive spectral clustering algorithms for directed and undirected hypergraphs. We illustrate our approach with applications for local and global alignment of protein-protein interaction networks between multiple species, for tripartite community detection in folksonomies, and for detecting clusters of overlapping regulatory pathways in directed networks.
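The paper derives its clustering algorithms from a generalized Perron-Frobenius theorem for hypergraphs; as a simplified illustrative stand-in (not the authors' algorithm), a hypergraph can be projected onto a weighted graph by clique expansion and then partitioned with ordinary spectral clustering, as in the toy example below.

```python
# Illustrative stand-in: clique-expand a small hypergraph into a weighted
# graph, then apply standard spectral clustering to the affinity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

# Hyperedges over 6 vertices (toy example).
hyperedges = [{0, 1, 2}, {1, 2, 3}, {3, 4, 5}, {4, 5, 0}]
n = 6

W = np.zeros((n, n))
for e in hyperedges:
    for i in e:
        for j in e:
            if i != j:
                W[i, j] += 1.0 / (len(e) - 1)   # clique-expansion weight

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```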
An entropic barriers diffusion theory of decision-making in multiple alternative tasks
Sigman, Mariano; Cecchi, Guillermo A.
2018-01-01
We present a theory of decision-making in the presence of multiple choices that departs from traditional approaches by explicitly incorporating entropic barriers in a stochastic search process. We analyze response time data from an on-line repository of 15 million blitz chess games, and show that our model fits not just the mean and variance, but the entire response time distribution (over several response-time orders of magnitude) at every stage of the game. We apply the model to show that (a) higher cognitive expertise corresponds to the exploration of more complex solution spaces, and (b) reaction times of users at an on-line buying website can be similarly explained. Our model can be seen as a synergy between diffusion models used to model simple two-choice decision-making and planning agents in complex problem solving. PMID:29499036
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
ERIC Educational Resources Information Center
Eseryel, Deniz; Ge, Xun; Ifenthaler, Dirk; Law, Victor
2011-01-01
Following a design-based research framework, this article reports two empirical studies with an educational MMOG, called "McLarin's Adventures," on facilitating 9th-grade students' complex problem-solving skill acquisition in interdisciplinary STEM education. The article discusses the nature of complex and ill-structured problem solving…
ERIC Educational Resources Information Center
Goode, Natassia; Beckmann, Jens F.
2010-01-01
This study investigates the relationships between structural knowledge, control performance and fluid intelligence in a complex problem solving (CPS) task. 75 participants received either complete, partial or no information regarding the underlying structure of a complex problem solving task, and controlled the task to reach specific goals.…
Pre-Service Teachers' Free and Structured Mathematical Problem Posing
ERIC Educational Resources Information Center
Silber, Steven; Cai, Jinfa
2017-01-01
This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…
A restricted Steiner tree problem is solved by Geometric Method II
NASA Astrophysics Data System (ADS)
Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu
2013-03-01
The minimum Steiner tree problem has a wide range of applications, such as transportation systems, communication networks, pipeline design and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so special cases are commonly considered. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L and we seek a vertex on L such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, we know that the complexity of this restricted problem is also NP-complete. In part one, we considered the restricted Steiner tree problem with two fixed vertices. Naturally, we now consider the restricted Steiner tree problem with three fixed vertices, and we again use a geometric method to solve the problem.
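To make the restricted setting concrete, the toy sketch below numerically chooses the point on a line that minimizes the total length of a star connecting it to fixed vertices lying on one side of the line. This is only a simplified variant for illustration, not the paper's geometric construction: a true restricted Steiner tree may add further Steiner points rather than connect every terminal directly to the point on L.

```python
# Toy variant: best point on the line y = 0 for a star to fixed vertices.
import numpy as np
from scipy.optimize import minimize_scalar

fixed = np.array([[0.0, 2.0], [3.0, 1.0], [5.0, 3.0]])   # all above y = 0

def star_length(x):
    p = np.array([x, 0.0])                 # candidate vertex on the line
    return np.linalg.norm(fixed - p, axis=1).sum()

res = minimize_scalar(star_length, bounds=(-10, 10), method="bounded")
print("best point on L:", (res.x, 0.0), "total length:", res.fun)
```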
ERIC Educational Resources Information Center
Scherer, Ronny; Tiemann, Rudiger
2012-01-01
The ability to solve complex scientific problems is regarded as one of the key competencies in science education. Until now, research on problem solving focused on the relationship between analytical and complex problem solving, but rarely took into account the structure of problem-solving processes and metacognitive aspects. This paper,…
ERIC Educational Resources Information Center
Blackburn, J. Joey; Robinson, J. Shane
2016-01-01
The purpose of this experimental study was to assess the effects of cognitive style, problem complexity, and hypothesis generation on the problem solving ability of school-based agricultural education students. Problem solving ability was defined as time to solution. Kirton's Adaption-Innovation Inventory was employed to assess students' cognitive…
Multichromosomal median and halving problems under different genomic distances
Tannier, Eric; Zheng, Chunfang; Sankoff, David
2009-01-01
Background Genome median and genome halving are combinatorial optimization problems that aim at reconstructing ancestral genomes as well as the evolutionary events leading from the ancestor to extant species. Exploring complexity issues is a first step towards devising efficient algorithms. The complexity of the median problem for unichromosomal genomes (permutations) has been settled for both the breakpoint distance and the reversal distance. Although the multichromosomal case has often been assumed to be a simple generalization of the unichromosomal case, it is also a relaxation so that complexity in this context does not follow from existing results, and is open for all distances. Results We settle here the complexity of several genome median and halving problems, including a surprising polynomial result for the breakpoint median and guided halving problems in genomes with circular and linear chromosomes, showing that the multichromosomal problem is actually easier than the unichromosomal problem. Still other variants of these problems are NP-complete, including the DCJ double distance problem, previously mentioned as an open question. We list the remaining open problems. Conclusion This theoretical study clears up a wide swathe of the algorithmical study of genome rearrangements with multiple multichromosomal genomes. PMID:19386099
Dievler, A; Pappas, G
1999-04-01
This paper explores how social class and race affect the public health policy-making process in an urban area. Ethnographic methods were used to collect and analyze information about HIV/AIDS and tuberculosis policy-making by the Washington, DC Commission of Public Health. Kingdon's conceptual model of policy making was used to analyze and understand the process. The problems of HIV/AIDS and tuberculosis in the district have important social class dimensions that were not always made explicit, but were instead defined in terms of 'race' and 'place'. Social class considerations and racial politics shaped which policies were developed or not developed, and which were implemented successfully or failed. This study, which has national and international implications, concludes that there is a need to improve our understanding of the complex social dimensions of public health problems; that there needs to be more consideration of the politics of strategy formulation and how issues of social class and race affect this process; and that public health needs to strengthen its constituency in order to build support for the successful development and implementation of policy.
Machine Learning Approaches in Cardiovascular Imaging.
Henglin, Mir; Stein, Gillian; Hushcha, Pavel V; Snoek, Jasper; Wiltschko, Alexander B; Cheng, Susan
2017-10-01
Cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data. Modern computational methods, developed in the field of machine learning, offer new approaches to leveraging the growing volume of imaging data available for analyses. Machine learning methods can now address data-related problems ranging from simple analytic queries of existing measurement data to the more complex challenges involved in analyzing raw images. To date, machine learning has been used in 2 broad and highly interconnected areas: automation of tasks that might otherwise be performed by a human and generation of clinically important new knowledge. Most cardiovascular imaging studies have focused on task-oriented problems, but more studies involving algorithms aimed at generating new clinical insights are emerging. Continued expansion in the size and dimensionality of cardiovascular imaging databases is driving strong interest in applying powerful deep learning methods, in particular, to analyze these data. Overall, the most effective approaches will require an investment in the resources needed to appropriately prepare such large data sets for analyses. Notwithstanding current technical and logistical challenges, machine learning and especially deep learning methods have much to offer and will substantially impact the future practice and science of cardiovascular imaging. © 2017 American Heart Association, Inc.
The application of artificial intelligence in the optimal design of mechanical systems
NASA Astrophysics Data System (ADS)
Poteralski, A.; Szczepanik, M.
2016-11-01
The paper is devoted to new computational techniques in mechanical optimization, where one tries to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past were incapable of giving low-cost and complete solutions. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with an application of bio-inspired methods, such as evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants for piezoelectric materials modeled by the BEM, and to identify parameters in acoustics problems modeled by the MFS.
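A minimal particle swarm optimizer conveys the flavor of the bio-inspired methods the paper couples to FEM/BEM/MFS analyses. In the sketch below the objective is a simple test function standing in for a structural analysis; parameter values are conventional defaults, not the paper's settings.

```python
# Minimal particle swarm optimizer (PSO) sketch on a stand-in objective.
import numpy as np

def objective(x):                       # stand-in for a structural analysis
    return np.sum(x ** 2, axis=-1)      # sphere function, minimum at 0

rng = np.random.default_rng(3)
n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration constants

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best design found:", gbest, "objective:", pbest_f.min())
```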
Analysis of mesoscopic attenuation in gas-hydrate bearing sediments
NASA Astrophysics Data System (ADS)
Rubino, J. G.; Ravazzoli, C. L.; Santos, J. E.
2007-05-01
Several authors have shown that seismic wave attenuation, combined with seismic velocities, constitutes a useful geophysical tool to infer the presence and amounts of gas hydrates lying in the pore space of sediments. However, the loss mechanism associated with the presence of hydrates is still not fully understood, and most works dealing with this problem focus on macroscopic fluid flow, friction between hydrates and the sediment matrix, and squirt flow. It is well known that an important cause of the attenuation levels observed in seismic data from some sedimentary regions is the mesoscopic loss mechanism, caused by heterogeneities in rock and fluid properties larger than the pore size but much smaller than the wavelengths. In order to analyze this effect in heterogeneous gas-hydrate bearing sediments, we developed a finite-element procedure to obtain the effective complex modulus of a heterogeneous porous material containing gas hydrates in its pore space, using compressibility tests at different oscillatory frequencies in the seismic range. The complex moduli were obtained by solving Biot's equations of motion in the space-frequency domain with appropriate boundary conditions representing a gedanken laboratory experiment measuring the complex volume change of a representative sample of heterogeneous bulk material. These complex moduli in turn allowed us to obtain the corresponding effective phase velocity and quality factor for each frequency and spatial gas hydrate distribution. Physical parameters taken from the Mallik 5L-38 Gas Hydrate Research well (Mackenzie Delta, Canada) were used to analyze the mesoscopic effects in realistic hydrated sediments.
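Once an effective complex modulus M(ω) has been obtained from such oscillatory tests, the phase velocity and quality factor follow from standard viscoelastic relations. The sketch below applies those generic relations with made-up numbers; it is not the authors' finite-element procedure, and the density and modulus values are assumptions for illustration only.

```python
# Post-processing sketch: phase velocity and Q from a complex modulus M(w).
import numpy as np

rho = 2100.0                          # bulk density, kg/m^3 (assumed)
M = 6.0e9 + 0.15e9j                   # effective complex modulus, Pa (assumed)

v_complex = np.sqrt(M / rho)          # complex velocity
v_phase = 1.0 / np.real(1.0 / v_complex)
Q = np.real(M) / np.imag(M)           # quality factor

print(f"phase velocity ~ {v_phase:.0f} m/s, Q ~ {Q:.0f}")
```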
AI mass spectrometers for space shuttle health monitoring
NASA Technical Reports Server (NTRS)
Adams, F. W.
1991-01-01
The facility Hazardous Gas Detection System (HGDS) at Kennedy Space Center (KSC) is a mass spectrometer based gas analyzer. Two instruments make up the HGDS, which is installed in a prime/backup arrangement, with the option of using both analyzers on the same sample line, or on two different lines simultaneously. It is used for monitoring the Shuttle during fuel loading, countdown, and drainback, if necessary. The use of complex instruments, operated over many shifts, has caused problems in tracking the status of the ground support equipment (GSE) and the vehicle. A requirement for overall system reliability has been a major force in the development of Shuttle GSE, and is the ultimate driver in the choice to pursue artificial intelligence (AI) techniques for Shuttle and Advanced Launch System (ALS) mass spectrometer systems. Shuttle applications of AI are detailed.
Problem decomposition by mutual information and force-based clustering
NASA Astrophysics Data System (ADS)
Otero, Richard Edward
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. As an advancement to current practice, this work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
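The dependence metric at the heart of this decomposition approach can be illustrated with a generic histogram-based mutual information estimator, which captures both the linear and the nonlinear coupling between two variables. This is a textbook estimator, not the thesis's own tooling, and the bin count is an arbitrary choice.

```python
# Histogram-based mutual information (in nats) between two continuous variables.
import numpy as np

def mutual_information(x, y, bins=20):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(4)
x = rng.normal(size=5000)
y_linear = 2 * x + rng.normal(scale=0.5, size=5000)
y_nonlinear = np.sin(3 * x) + rng.normal(scale=0.1, size=5000)
# Both couplings are detected, even though the second is not linear.
print(mutual_information(x, y_linear), mutual_information(x, y_nonlinear))
```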
Writing testable software requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knirk, D.
1997-11-01
This tutorial identifies common problems in analyzing requirements in the problem domain and in constructing a written specification of what the software is to do. It deals with two main problem areas: identifying and describing problem requirements, and analyzing and describing behavior specifications.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. Those models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and to capture the contents of fault trees as the initial state of the trees.
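The core idea, learning threshold-style fault rules from labeled telemetry and applying them to new samples, can be sketched with a standard decision tree classifier. The feature names, the synthetic telemetry, and the fault-labeling rule below are hypothetical placeholders, not ISS data or the paper's model.

```python
# Sketch: learn fault rules from labeled telemetry, then classify new samples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 2000
pressure = rng.normal(100, 5, n)
temperature = rng.normal(20, 2, n)
flow = rng.normal(50, 4, n)
X = np.column_stack([pressure, temperature, flow])

# Hypothetical fault condition used only to generate training labels.
y = ((pressure > 108) & (flow < 46)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["pressure", "temperature", "flow"]))
print("fault?", tree.predict([[110.0, 21.0, 44.0]]))
```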
Integrated Safety Analysis Teams
NASA Technical Reports Server (NTRS)
Wetherholt, Jonathan C.
2008-01-01
Today's complex systems require understanding beyond one person's capability to comprehend. Each system requires a team to divide the system into understandable subsystems which can then be analyzed with an Integrated Hazard Analysis. The team must have both specific experiences and diversity of experience. Safety experience and system understanding are not always manifested in one individual. Group dynamics make the difference between success and failure, as well as the difference between a difficult task and a rewarding experience. There are examples in the news which demonstrate the need to connect the pieces of a system into a complete picture. The Columbia disaster is now a standard example: a low-consequence hazard in one part of the system, the External Tank, was a catastrophic hazard cause for a companion subsystem, the Space Shuttle Orbiter. The interaction between the hardware, the manufacturing process, the handling, and the operations contributed to the problem. Each of these had analysis performed, but who constituted the team that integrated these analyses together? This paper will explore some of the methods used for dividing up a complex system and how one integration team has analyzed the parts. How this analysis has been documented in one particular launch vehicle case will also be discussed.
Evaluation of particle-based flow characteristics using novel Eulerian indices
NASA Astrophysics Data System (ADS)
Cho, Youngmoon; Kang, Seongwon
2017-11-01
The main objective of this study is to evaluate flow characteristics in complex particle-laden flows efficiently using novel Eulerian indices. For flows with a large number of particles, a Lagrangian approach leads to accurate yet inefficient prediction in many engineering problems. We propose a technique based on an Eulerian transport equation and ensemble-averaged particle properties, which enables efficient evaluation of various particle-based flow characteristics such as the residence time, accumulated travel distance, mean radial force, etc. As a verification study, we compare the developed Eulerian indices with those using Lagrangian approaches for laminar flows with and without a swirling motion and density ratio. The results show satisfactory agreement between the two approaches. The accumulated travel distance is modified to analyze flow motions inside IC engines and, when applied to flow bench cases, it can predict swirling and tumbling motions successfully. For flows inside a cyclone separator, the mean radial force is applied to predict the separation of particles and is shown to have a high correlation to the separation efficiency for various working conditions. In conclusion, the proposed Eulerian indices are shown to be useful tools to analyze complex particle-based flow characteristics.
Research on distributed virtual reality system in electronic commerce
NASA Astrophysics Data System (ADS)
Xue, Qiang; Wang, Jiening; Sun, Jizhou
2004-03-01
In this paper, Distributed Virtual Reality (DVR) technology applied in Electronic Commerce (EC) is discussed. DVR provides a new means for humans to recognize, analyze and resolve large-scale, complex problems, which has made it develop quickly in EC fields. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to meet the need for a platform that can provide the necessary cooperation and communication services without developing the basic modules repeatedly. Finally, the paper gives a platform structure of the EC-DVR system.
The Methods of Cognitive Visualization for the Astronomical Databases Analyzing Tools Development
NASA Astrophysics Data System (ADS)
Vitkovskiy, V.; Gorohov, V.
2008-08-01
There are two kinds of computer graphics: illustrative and cognitive. Appropriate cognitive pictures not only make the sense of complex and difficult scientific concepts evident and clear, but also promote, not so very rarely, the birth of new knowledge. On the basis of the cognitive graphics concept, we developed a software system for visualization and analysis. It allows researchers to train and sharpen their intuition, to raise their interest in and motivation for creative scientific cognition, and to engage in a process of dialogue with the problems themselves.
Experiments in Natural and Synthetic Dental Materials: A Mouthful of Experiments
NASA Technical Reports Server (NTRS)
Masi, James V.
1996-01-01
The objectives of these experiments are to show that the area of biomaterials, especially dental materials (natural and synthetic), contains all of the elements of good and bad design, with the caveat that a person's health is directly involved. The students learn the process of designing materials for the complex interactions in the oral cavity, analyze those already used, and suggest possible solutions to the problems involved with present technology. The N.I.O.S.H. Handbook is used extensively by the students, and judgement calls are made, even without extensive biology education.
Numerical simulation of swept-wing flows
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1991-01-01
Efforts of the last six months to computationally model the transition process characteristics of flow over swept wings are described. Specifically, the crossflow instability and crossflow/Tollmien-Schlichting wave interactions are analyzed through the numerical solution of the full 3D Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiment. The leading edge region of a swept wing is considered in a 3D spatial simulation with random disturbances as the initial conditions.
Model for the computation of self-motion in biological systems
NASA Technical Reports Server (NTRS)
Perrone, John A.
1992-01-01
A technique is presented by which direction- and speed-tuned cells, such as those commonly found in the middle temporal region of the primate brain, can be utilized to analyze the patterns of retinal image motion that are generated during observer movement through the environment. The developed model determines heading by finding the peak response in a population of detectors or neurons each tuned to a particular heading direction. It is suggested that a complex interaction of multiple cell networks is required for the solution of the self-motion problem in the primate brain.
Mikš, Antonín; Novák, Pavel
2018-05-10
In this article, we analyze the problem of the paraxial design of an active optical element with variable focal length, which maintains the positions of its principal planes fixed during the change of its optical power. Such optical elements are important in the process of design of complex optical systems (e.g., zoom systems), where the fixed position of principal planes during the change of optical power is essential for the design process. The proposed solution is based on the generalized membrane tunable-focus fluidic lens with several membrane surfaces.
Automatic acquisition of motion trajectories: tracking hockey players
NASA Astrophysics Data System (ADS)
Okuma, Kenji; Little, James J.; Lowe, David
2003-12-01
Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, querying them by analyzing the descriptive information of the data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments will hopefully establish the infrastructure of an automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.
Cape, John; Morris, Elena; Burd, Mary; Buszewicz, Marta
2008-01-01
Background: How GPs understand mental health problems determines their treatment choices; however, measures describing GPs' thinking about such problems are not currently available. Aim: To develop a measure of the complexity of GP explanations of common mental health problems and to pilot its reliability and validity. Design of study: A qualitative development of the measure, followed by inter-rater reliability and validation pilot studies. Setting: General practices in North London. Method: Vignettes of simulated consultations with patients with mental health problems were videotaped, and an anchored measure of complexity of psychosocial explanation in response to these vignettes was developed. Six GPs, four psychologists, and two lay people viewed the vignettes. Their responses were rated for complexity, both using the anchored measure and independently by two experts in primary care mental health. In a second reliability and revalidation study, responses of 50 GPs to two vignettes were rated for complexity. The GPs also completed a questionnaire to determine their interest and training in mental health, and they completed the Depression Attitudes Questionnaire. Results: Inter-rater reliability of the measure of complexity of explanation in both pilot studies was satisfactory (intraclass correlation coefficient = 0.78 and 0.72). The measure correlated with expert opinion as to what constitutes a complex explanation, and the responses of psychologists, GPs, and lay people differed in measured complexity. GPs with higher complexity scores had greater interest, more training in mental health, and more positive attitudes to depression. Conclusion: Results suggest that the complexity of GPs' psychosocial explanations about common mental health problems can be reliably and validly assessed by this new standardised measure. PMID:18505616
Clinical Problem Analysis (CPA): A Systematic Approach To Teaching Complex Medical Problem Solving.
ERIC Educational Resources Information Center
Custers, Eugene J. F. M.; Robbe, Peter F. De Vries; Stuyt, Paul M. J.
2000-01-01
Discusses clinical problem analysis (CPA) in medical education, an approach to solving complex clinical problems. Outlines the five step CPA model and examines the value of CPA's content-independent (methodical) approach. Argues that teaching students to use CPA will enable them to avoid common diagnostic reasoning errors and pitfalls. Compares…
Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades
Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.
2012-01-01
This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior and were assessed 4 times from beginning of 3rd through end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985
NASA Astrophysics Data System (ADS)
Steen-Eibensteiner, Janice Lee
2006-07-01
A strong science knowledge base and problem solving skills have always been highly valued for employment in the science industry. Skills currently needed for employment include being able to problem solve (Overtoom, 2000). Academia also recognizes the need for effectively teaching students to apply problem solving skills in clinical settings. This thesis investigates how students solve complex science problems in an academic setting in order to inform the development of problem solving skills for the workplace. Students' use of problem solving skills in the form of learned concepts and procedural knowledge was studied as students completed a problem that might come up in real life. Students were taking a community college sophomore biology course, Human Anatomy & Physiology II. The problem topic was negative feedback inhibition of the thyroid and parathyroid glands. The research questions answered were (1) How well do community college students use a complex of conceptual knowledge when solving a complex science problem? (2) What conceptual knowledge are community college students using correctly, incorrectly, or not using when solving a complex science problem? (3) What problem solving procedural knowledge are community college students using successfully, unsuccessfully, or not using when solving a complex science problem? From the whole class, the high academic-level participants performed at a mean of 72% correct on chapter test questions, a low-average to fair grade of C-. The middle and low academic participants both failed (F) the test questions (37% and 30%, respectively); 29% (9/31) of the students showed only a fair performance while 71% (22/31) failed. In the subset sample of 2 students each from the high, middle, and low academic levels selected from the whole class, 35% (8/23) of the concepts were used effectively, 22% (5/23) marginally, and 43% (10/23) poorly. Only 1 concept was used incorrectly by 3/6 of the students and identified as a misconception. One of 21 (5%) problem-solving pathway characteristics was used effectively, 7 (33%) marginally, and 13 (62%) poorly. Very few (0 to 4) problem-solving pathway characteristics were used unsuccessfully; most were simply not used.
Complex Problem Solving in a Workplace Setting.
ERIC Educational Resources Information Center
Middleton, Howard
2002-01-01
Studied complex problem solving in the hospitality industry through interviews with six office staff members and managers. Findings show it is possible to construct a taxonomy of problem types and that the most common approach can be termed "trial and error." (SLD)
Insight and analysis problem solving in microbes to machines.
Clark, Kevin B
2015-11-01
A key feature for obtaining solutions to difficult problems, insight is oftentimes vaguely regarded as a special discontinuous intellectual process and/or a cognitive restructuring of problem representation or goal approach. However, this nearly century-old state of art devised by the Gestalt tradition to explain the non-analytical or non-trial-and-error, goal-seeking aptitude of primate mentality tends to neglect problem-solving capabilities of lower animal phyla, Kingdoms other than Animalia, and advancing smart computational technologies built from biological, artificial, and composite media. Attempting to provide an inclusive, precise definition of insight, two major criteria of insight, discontinuous processing and problem restructuring, are here reframed using terminology and statistical mechanical properties of computational complexity classes. Discontinuous processing becomes abrupt state transitions in algorithmic/heuristic outcomes or in types of algorithms/heuristics executed by agents using classical and/or quantum computational models. And problem restructuring becomes combinatorial reorganization of resources, problem-type substitution, and/or exchange of computational models. With insight bounded by computational complexity, humans, ciliated protozoa, and complex technological networks, for example, show insight when restructuring time requirements, combinatorial complexity, and problem type to solve polynomial and nondeterministic polynomial decision problems. Similar effects are expected from other problem types, supporting the idea that insight might be an epiphenomenon of analytical problem solving and consequently a larger information processing framework. Thus, this computational complexity definition of insight improves the power, external and internal validity, and reliability of operational parameters with which to classify, investigate, and produce the phenomenon for computational agents ranging from microbes to man-made devices. Copyright © 2015 Elsevier Ltd. All rights reserved.
Translating concepts of complexity to the field of ergonomics.
Walker, Guy H; Stanton, Neville A; Salmon, Paul M; Jenkins, Daniel P; Rafferty, Laura
2010-10-01
Since 1958 more than 80 journal papers from the mainstream ergonomics literature have used either the words 'complex' or 'complexity' in their titles. Of those, more than 90% have been published in only the past 20 years. This observation communicates something interesting about the way in which contemporary ergonomics problems are being understood. The study of complexity itself derives from non-linear mathematics but many of its core concepts have found analogies in numerous non-mathematical domains. Set against this cross-disciplinary background, the current paper aims to provide a similar initial mapping to the field of ergonomics. In it, the ergonomics problem space, complexity metrics and powerful concepts such as emergence raise complexity to the status of an important contingency factor in achieving a match between ergonomics problems and ergonomics methods. The concept of relative predictive efficiency is used to illustrate how this match could be achieved in practice. What is clear overall is that a major source of, and solution to, complexity are the humans in systems. Understanding complexity on its own terms offers the potential to leverage disproportionate effects from ergonomics interventions and to tighten up the often loose usage of the term in the titles of ergonomics papers. STATEMENT OF RELEVANCE: This paper reviews and discusses concepts from the study of complexity and maps them to ergonomics problems and methods. It concludes that humans are a major source of and solution to complexity in systems and that complexity is a powerful contingency factor, which should be considered to ensure that ergonomics approaches match the true nature of ergonomics problems.
Moe, Aubrey M; Breitborde, Nicholas J K; Bourassa, Kyle J; Gallagher, Colin J; Shakeel, Mohammed K; Docherty, Nancy M
2018-06-01
Schizophrenia researchers have focused on phenomenological aspects of the disorder to better understand its underlying nature. In particular, development of personal narratives-that is, the complexity with which people form, organize, and articulate their "life stories"-has recently been investigated in individuals with schizophrenia. However, less is known about how aspects of narrative relate to indicators of neurocognitive and social functioning. The objective of the present study was to investigate the association of linguistic complexity of life-story narratives to measures of cognitive and social problem-solving abilities among people with schizophrenia. Thirty-two individuals with a diagnosis of schizophrenia completed a research battery consisting of clinical interviews, a life-story narrative, neurocognitive testing, and a measure assessing multiple aspects of social problem solving. Narrative interviews were assessed for linguistic complexity using computerized technology. The results indicate differential relationships of linguistic complexity and neurocognition to domains of social problem-solving skills. More specifically, although neurocognition predicted how well one could both describe and enact a solution to a social problem, linguistic complexity alone was associated with accurately recognizing that a social problem had occurred. In addition, linguistic complexity appears to be a cognitive factor that is discernible from other broader measures of neurocognition. Linguistic complexity may be more relevant in understanding earlier steps of the social problem-solving process than more traditional, broad measures of cognition, and thus is relevant in conceptualizing treatment targets. These findings also support the relevance of developing narrative-focused psychotherapies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Complexity of the Quantum Adiabatic Algorithm
NASA Astrophysics Data System (ADS)
Hen, Itay
2013-03-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm: We analyze the efficiency of the algorithm on several ``hard'' (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to ``real life'' problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as ``D-Wave One'', which imposes serious restrictions on the types of problems that may be tested. Finally, we discuss different approaches to find improved implementations of the algorithm such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space and others.
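As a rough, self-contained illustration of the gap analysis described above, the following sketch builds a small transverse-field Ising Hamiltonian H(s) = (1 - s) H_driver + s H_problem, scans the adiabatic parameter s, and records the gap between the two lowest energy levels. The instance (random ring couplings and fields) and the tiny system size are arbitrary stand-ins chosen so exact diagonalization stays cheap; they are not the "hard" NP instances or quantum Monte Carlo methods of the study.

```python
import numpy as np
from functools import reduce

# Pauli-x and identity
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
id2 = np.eye(2)

def kron_at(op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit tensor product."""
    ops = [op if i == site else id2 for i in range(n)]
    return reduce(np.kron, ops)

n = 6  # qubits; kept tiny so exact diagonalization is trivial
rng = np.random.default_rng(0)

# Driver Hamiltonian: -sum_i sigma_x^i (uniform transverse field)
H_driver = -sum(kron_at(sx, i, n) for i in range(n))

# Problem Hamiltonian: random Ising ring with local fields, diagonal in the z basis
spins = np.array([[1 - 2 * ((b >> i) & 1) for i in range(n)] for b in range(2 ** n)])
J = rng.normal(size=n)
h = rng.normal(size=n)
energies = sum(J[i] * spins[:, i] * spins[:, (i + 1) % n] for i in range(n)) \
         + sum(h[i] * spins[:, i] for i in range(n))
H_problem = np.diag(energies)

# Interpolate H(s) = (1 - s) * H_driver + s * H_problem and track the spectral gap
gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    gaps.append(evals[1] - evals[0])

print("minimum spectral gap along the interpolation:", min(gaps))
```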
Boonen, Anton J. H.; de Koning, Björn B.; Jolles, Jelle; van der Schoot, Menno
2016-01-01
Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from a RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption in 80 sixth-grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that call for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers showed low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME. PMID:26925012
Boonen, Anton J H; de Koning, Björn B; Jolles, Jelle; van der Schoot, Menno
2016-01-01
Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from a RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption in 80 sixth-grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that call for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers showed low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME.
NASA Astrophysics Data System (ADS)
Shen, Jing; Lu, Hongwei; Zhang, Yang; Song, Xinshuang; He, Li
2016-05-01
Ecosystem management is a pressing and urgent topic in the face of increasing population growth and resource depletion. This paper develops an urban ecosystem vulnerability assessment method representing a new vulnerability paradigm for decision makers and environmental managers, as it is an early warning system to identify and prioritize the undesirable environmental changes in terms of natural, human, economic and social elements. The whole idea is to decompose a complex problem into sub-problems, analyze each sub-problem, and then aggregate all sub-problems to solve the overall problem. This method integrates the spatial context of a Geographic Information System (GIS) tool, a multi-criteria decision analysis (MCDA) method, ordered weighted averaging (OWA) operators, and socio-economic elements. Decision makers can obtain urban ecosystem vulnerability assessment results under different attitudes toward vulnerability. To test the potential of the vulnerability methodology, it has been applied to a case study area in Beijing, China, where it proved to be reliable and consistent with the Beijing City Master Plan. The results of the urban ecosystem vulnerability assessment can support decision makers in evaluating the necessity of taking specific measures to preserve human health and manage environmental stressors for a city or multiple cities, and in identifying the implications and consequences of their decisions.
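A minimal sketch of the OWA aggregation step mentioned above: OWA weights apply to criterion scores after sorting rather than to specific criteria, so different weight vectors express different decision-maker attitudes toward vulnerability. The criterion values and weights below are made-up placeholders, not data from the Beijing case study.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights apply to the values sorted in descending
    order, so they encode the decision maker's attitude (pessimistic vs. optimistic)."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

# Hypothetical normalized vulnerability scores for one spatial unit
# (e.g. natural, human, economic, social criteria), all in [0, 1]
scores = [0.82, 0.40, 0.65, 0.30]

print(owa(scores, [0.25, 0.25, 0.25, 0.25]))  # neutral attitude: plain average
print(owa(scores, [0.70, 0.20, 0.10, 0.00]))  # pessimistic: emphasizes the worst scores
print(owa(scores, [0.00, 0.10, 0.20, 0.70]))  # optimistic: emphasizes the best scores
```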
Cognition of an expert tackling an unfamiliar conceptual physics problem
NASA Astrophysics Data System (ADS)
Schuster, David; Undreiu, Adriana
2009-11-01
We have investigated and analyzed the cognition of an expert tackling a qualitative conceptual physics problem of an unfamiliar type. Our goal was to elucidate the detailed cognitive processes and knowledge elements involved, irrespective of final solution form, and consider implications for instruction. The basic but non-trivial problem was to find qualitatively the direction of acceleration of a pendulum bob at various stages of its motion, a problem originally studied by Reif and Allen. Methodology included interviews, introspection, retrospection and self-reported metacognition. Multiple facets of cognition were revealed, with different reasoning strategies used at different stages and for different points on the path. An account is given of the zigzag thinking paths and interplay of reasoning modes and schema elements involved. We interpret the cognitive processes in terms of theoretical concepts that emerged, namely: case-based, principle-based, experiential-intuitive and practical-heuristic reasoning; knowledge elements and schemata; activation; metacognition and epistemic framing. The complexity of cognition revealed in this case study contrasts with the tidy principle-based solutions we present to students. The pervasive role of schemata, case-based reasoning, practical heuristic strategies, and their interplay with physics principles is noteworthy, since these aspects of cognition are generally neither recognized nor taught. The schema/reasoning-mode perspective has direct application in science teaching, learning and problem-solving.
Fluid-solid coupled simulation of the ignition transient of solid rocket motor
NASA Astrophysics Data System (ADS)
Li, Qiang; Liu, Peijin; He, Guoqiang
2015-05-01
The first period of the solid rocket motor operation is the ignition transient, which involves complex processes and, according to chronological sequence, can be divided into several stages, namely, igniter jet injection, propellant heating and ignition, flame spreading, chamber pressurization and solid propellant deformation. The ignition transient should be comprehensively analyzed because it significantly influences the overall performance of the solid rocket motor. A numerical approach is presented in this paper for simulating the fluid-solid interaction problems in the ignition transient of the solid rocket motor. In the proposed procedure, the time-dependent numerical solutions of the governing equations of internal compressible fluid flow are loosely coupled with those of the geometrical nonlinearity problems to determine the propellant mechanical response and deformation. The well-known Zeldovich-Novozhilov model was employed to model propellant ignition and combustion. The fluid-solid coupling interface data interpolation scheme and coupling instance for different computational agents were also reported. Finally, numerical validation was performed, and the proposed approach was applied to the ignition transient of one laboratory-scale solid rocket motor. For the application, the internal ballistics were obtained from the ground hot firing test, and comparisons were made. Results show that the integrated framework allows us to perform coupled simulations of the propellant ignition, strong unsteady internal fluid flow, and propellant mechanical response in SRMs with satisfactory stability and efficiency and presents a reliable and accurate solution to complex multi-physics problems.
Poot, Antonius J.; den Elzen, Wendy P. J.; Blom, Jeanet W.; Gussekloo, Jacobijn
2014-01-01
Background Satisfaction is widely used to evaluate and direct delivery of medical care; a complicated relationship exists between patient satisfaction, morbidity and age. This study investigates the relationships between complexity of health problems and level of patient satisfaction of older persons with their general practitioner (GP) and practice. Methods and Findings This study is embedded in the ISCOPE (Integrated Systematic Care for Older Persons) study. Enlisted patients aged ≥75 years from 59 practices received a written questionnaire to screen for complex health problems (somatic, functional, psychological and social). For 2664 randomly chosen respondents (median age 82 years; 68% female) information was collected on level of satisfaction (satisfied, neutral, dissatisfied) with their GP and general practice, and demographic and clinical characteristics including complexity of health problems. Of all participants, 4% were dissatisfied with their GP care, 59% were neutral, and 37% were satisfied. Between these three categories no differences were observed in age, gender, country of birth or education level. The percentage of participants dissatisfied with their GP care increased from 0.4% in those with 0 problem domains to 8% in those with 4 domains, i.e. having complex health problems (p<0.001). Per additional health domain with problems, the risk of being dissatisfied increased 1.7 times (95% CI 1.4–2.14; p<0.001). This was independent of age, gender, and demographic and clinical parameters (adjusted OR 1.4, 95% CI 1.1–1.8; p = 0.021). Conclusion In older persons, dissatisfaction with general practice is strongly correlated with rising complexity of health problems, independent of age, demographic and clinical parameters. It remains unclear whether complexity of health problems is a patient characteristic influencing the perception of care, or whether the care is unable to handle the demands of these patients. Prospective studies are needed to investigate the causal associations between care organization, patient characteristics, indicators of quality, and patient perceptions. PMID:24710557
Poot, Antonius J; den Elzen, Wendy P J; Blom, Jeanet W; Gussekloo, Jacobijn
2014-01-01
Satisfaction is widely used to evaluate and direct delivery of medical care; a complicated relationship exists between patient satisfaction, morbidity and age. This study investigates the relationships between complexity of health problems and level of patient satisfaction of older persons with their general practitioner (GP) and practice. This study is embedded in the ISCOPE (Integrated Systematic Care for Older Persons) study. Enlisted patients aged ≥75 years from 59 practices received a written questionnaire to screen for complex health problems (somatic, functional, psychological and social). For 2664 randomly chosen respondents (median age 82 years; 68% female) information was collected on level of satisfaction (satisfied, neutral, dissatisfied) with their GP and general practice, and demographic and clinical characteristics including complexity of health problems. Of all participants, 4% were dissatisfied with their GP care, 59% were neutral, and 37% were satisfied. Between these three categories no differences were observed in age, gender, country of birth or education level. The percentage of participants dissatisfied with their GP care increased from 0.4% in those with 0 problem domains to 8% in those with 4 domains, i.e. having complex health problems (p<0.001). Per additional health domain with problems, the risk of being dissatisfied increased 1.7 times (95% CI 1.4-2.14; p<0.001). This was independent of age, gender, and demographic and clinical parameters (adjusted OR 1.4, 95% CI 1.1-1.8; p = 0.021). In older persons, dissatisfaction with general practice is strongly correlated with rising complexity of health problems, independent of age, demographic and clinical parameters. It remains unclear whether complexity of health problems is a patient characteristic influencing the perception of care, or whether the care is unable to handle the demands of these patients. Prospective studies are needed to investigate the causal associations between care organization, patient characteristics, indicators of quality, and patient perceptions.
Cardone, A.; Bornstein, A.; Pant, H. C.; Brady, M.; Sriram, R.; Hassan, S. A.
2015-01-01
A method is proposed to study protein-ligand binding in a system governed by specific and non-specific interactions. Strong associations lead to narrow distributions in the protein's configuration space; weak and ultra-weak associations lead instead to broader distributions, a manifestation of non-specific, sparsely-populated binding modes with multiple interfaces. The method is based on the notion that a discrete set of preferential first-encounter modes are metastable states from which stable (pre-relaxation) complexes at equilibrium evolve. The method can be used to explore alternative pathways of complexation with statistical significance and can be integrated into a general algorithm to study protein interaction networks. The method is applied to a peptide-protein complex. The peptide adopts several low-population conformers and binds in a variety of modes with a broad range of affinities. The system is thus well suited to analyze general features of binding, including conformational selection, multiplicity of binding modes, and nonspecific interactions, and to illustrate how the method can be applied to study these problems systematically. The equilibrium distributions can be used to generate biasing functions for simulations of multiprotein systems from which bulk thermodynamic quantities can be calculated. PMID:25782918
Identity: a complex structure for researching students' academic behavior in science and mathematics
NASA Astrophysics Data System (ADS)
Aydeniz, Mehmet; Hodge, Lynn Liao
2011-06-01
This article is a response to Pike and Dunne's research. The focus of their analysis is on reflections of studying science post-16. Pike and Dunne draw attention to under-enrollment in science, technology, engineering, and mathematics (STEM) fields, in particular in the fields of physics, chemistry, and biology in the United Kingdom. We provide an analysis of how the authors conceptualize the problem of scientific career choices, the theoretical framework through which they study the problem, and the methodology they use to collect and analyze data. In addition, we examine the perspective they provide in light of new developments in the field of students' attitudes towards science and mathematics. More precisely, we draw attention to and explicate the authors' use of identity from the perspective of emerging theories that explore the relationships between the learner and culture in the context of science and mathematics.
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
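For readers unfamiliar with spectral partitioning, the sketch below recovers a planted two-community split from the sign pattern of the eigenvector associated with the second-largest adjacency eigenvalue. It uses a generic planted-partition random graph rather than the equitable (block-regular) ensemble analyzed in the paper, so it only illustrates the mechanics, not the paper's exact solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small two-block random graph (planted partition), just for illustration
n = 200                      # nodes per block
p_in, p_out = 0.10, 0.02     # within- and between-block connection probabilities
labels = np.repeat([0, 1], n)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((2 * n, 2 * n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                  # symmetric adjacency matrix, zero diagonal

# Spectral bisection: sign of the eigenvector for the second-largest eigenvalue
evals, evecs = np.linalg.eigh(A)   # eigenvalues sorted ascending
v2 = evecs[:, -2]
guess = (v2 > 0).astype(int)

# Agreement with the planted partition, up to relabeling of the two communities
acc = max(np.mean(guess == labels), np.mean(guess != labels))
print(f"recovered {acc:.1%} of node labels")
```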
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.; ,
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions through the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.
1992-01-01
Analytical derivations of stress intensity factors (SIF's) of a multicracked plate can be complex and tedious. Recent advances, however, in intelligent application of symbolic computation can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIF's at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of the microcrack's location, orientation, size, and interaction are taken into account.
Spectral partitioning in equitable graphs
NASA Astrophysics Data System (ADS)
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
The Integrated Airframe/Propulsion Control System Architecture program (IAPSA)
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Cohen, Gerald C.; Meissner, Charles W.
1990-01-01
The Integrated Airframe/Propulsion Control System Architecture program (IAPSA) is a two-phase program which was initiated by NASA in the early 80s. The first phase, IAPSA 1, studied different architectural approaches to the problem of integrating engine control systems with airframe control systems in an advanced tactical fighter. One of the conclusions of IAPSA 1 was that the technology to construct a suitable system was available, yet the ability to create these complex computer architectures has outpaced the ability to analyze the resulting system's performance. With this in mind, the second phase of IAPSA approached the same problem with the added constraint that the system be designed for validation. The intent of the design for validation requirement is that validation requirements should be shown to be achievable early in the design process. IAPSA 2 has demonstrated that despite diligent efforts, integrated systems can retain characteristics which are difficult to model and, therefore, difficult to validate.
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
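As a concrete illustration of the Markov Chain Monte Carlo machinery discussed in the review (and independent of the bmcmc package it distributes), the following is a minimal random-walk Metropolis sampler for the posterior of a Gaussian mean with known variance and a flat prior; the data, step size, and chain length are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: unknown mean mu, known sigma; flat prior on mu
data = rng.normal(loc=2.5, scale=1.0, size=50)
sigma = 1.0

def log_posterior(mu):
    # Log-likelihood up to an additive constant (flat prior contributes nothing)
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

# Random-walk Metropolis sampler
n_steps, step = 10_000, 0.5
chain = np.empty(n_steps)
mu, logp = 0.0, log_posterior(0.0)
accepted = 0
for i in range(n_steps):
    prop = mu + step * rng.normal()
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance rule
        mu, logp = prop, logp_prop
        accepted += 1
    chain[i] = mu

burn = n_steps // 5   # discard burn-in before summarizing
print("posterior mean ~", chain[burn:].mean(), "| acceptance rate", accepted / n_steps)
```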
Two implementations of the Expert System for the Flight Analysis System (ESFAS) project
NASA Technical Reports Server (NTRS)
Wang, Lui
1988-01-01
A comparison is made between the two most sophisticated expert system building tools, the Automated Reasoning Tool (ART) and the Knowledge Engineering Environment (KEE). The same problem domain (ESFAS) was used in making the comparison. The Expert System for the Flight Analysis System (ESFAS) acts as an intelligent front end for the Flight Analysis System (FAS). FAS is a complex configuration controlled set of interrelated processors (FORTRAN routines) which will be used by the Mission Planning and Analysis Div. (MPAD) to design and analyze Shuttle and potential Space Station missions. Implementations of ESFAS are described. The two versions represent very different programming paradigms; ART uses rules and KEE uses objects. Due to each of the tools philosophical differences, KEE is implemented using a depth first traversal algorithm, whereas ART uses a user directed traversal method. Either tool could be used to solve this particular problem.
Research and simulation of the decoupling transformation in AC motor vector control
NASA Astrophysics Data System (ADS)
He, Jiaojiao; Zhao, Zhongjie; Liu, Ken; Zhang, Yongping; Yao, Tuozhong
2018-04-01
The permanent magnet synchronous motor (PMSM) is a nonlinear, strongly coupled, multivariable complex object, and decoupling transformations can solve its coupling problem. This paper presents a mathematical model of the PMSM and introduces the coordinate transformations used in PMSM vector control. By diagonalizing the inductance matrix with a modal matrix, drawing on matrix theory for changes of coordinates, the coupled quantities are made independent, so that the flux-producing and torque-producing current components can be controlled separately. The corresponding coordinate transformation matrix is derived, providing an approach to solving the coupling problem of AC motors. Finally, in the Matlab/Simulink environment, a simulation model of PMSM vector control is built by establishing and combining the PMSM body model and the coordinate conversion modules; each part of the model is described and the simulation results are analyzed.
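The decoupling idea can be illustrated with the standard Clarke and Park transforms: for balanced three-phase currents, the rotating d/q components come out constant, which is what allows the flux-producing (d-axis) and torque-producing (q-axis) currents to be regulated independently. This is a generic textbook sketch with made-up current values, not the specific modal-matrix derivation of the paper.

```python
import numpy as np

def clarke(i_a, i_b, i_c):
    """Amplitude-invariant Clarke transform: three-phase currents -> stationary alpha/beta frame."""
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (1.0 / np.sqrt(3.0)) * (i_b - i_c)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: rotate alpha/beta into the rotor-aligned d/q frame at angle theta."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# Balanced three-phase currents sampled at several rotor angles: after the two
# transforms the d component is constant and the q component is zero, i.e. decoupled.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
I = 10.0
i_a = I * np.cos(theta)
i_b = I * np.cos(theta - 2 * np.pi / 3)
i_c = I * np.cos(theta + 2 * np.pi / 3)
i_d, i_q = park(*clarke(i_a, i_b, i_c), theta)
print(np.round(i_d, 6))
print(np.round(i_q, 6))
```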
Bourdieu at the bedside: briefing parents in a pediatric hospital.
LeGrow, Karen; Hodnett, Ellen; Stremler, Robyn; McKeever, Patricia; Cohen, Eyal
2014-12-01
The philosophy of family-centered care (FCC) promotes partnerships between families and staff to plan, deliver, and evaluate services for children and has been officially adopted by a majority of pediatric hospitals throughout North America. However, studies indicated that many parents have continued to be dissatisfied with their decision-making roles in their child's care. This is particularly salient for parents of children with chronic ongoing complex health problems. These children are dependent upon medical technology and require frequent hospitalizations during which parents must contribute to difficult decisions regarding their child's care. Given this clinical issue, an alternative theoretical perspective was explored to redress this problem. Pierre Bourdieu's theoretical concepts of field, capital, and habitus were used to analyze the hierarchical relationships in pediatric acute care hospitals and to design a briefing intervention aimed at improving parents' satisfaction with decision making in that health care setting. © 2014 John Wiley & Sons Ltd.
New Polymer Materials for the Laser Sintering Process: Polypropylene and Others
NASA Astrophysics Data System (ADS)
Wegner, Andreas
Laser sintering of polymers is gaining more and more importance for small-series production. However, only a small number of materials are available for the process. In most cases parts are built up using polyamide 12 or polyamide 11. Reasons for that are high prices, restricted availability, poor mechanical part properties, or an insufficient understanding of the processing of other materials. These problems result from the complex processing conditions in laser sintering, which place high requirements on the material's characteristics. Within this area, fundamental knowledge has been established at the chair for manufacturing technology. The aim of the presented study was to qualify different polymers for the laser sintering process. Polyethylene, polypropylene, polyamide 6, polyoxymethylene, and polybutylene terephthalate were analyzed. Within the study, problems of qualifying new materials are discussed using selected examples. Furthermore, the processing conditions as well as the mechanical properties of a new polypropylene compound are shown, considering also different laser sintering machines.
Navier-Stokes simulation of the crossflow instability in swept-wing flows
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1989-01-01
The computational modeling of the transition process characteristic of flows over swept wings is described. Specifically, the crossflow instability and crossflow/T-S wave interactions are analyzed through the numerical solution of the full three-dimensional Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiments. The leading edge region of a swept wing is considered in a three-dimensional spatial simulation with random disturbances as the initial conditions. The work has been closely coordinated with the experimental program of Professor William Saric, examining the same problem. Comparisons with NASA flight test data and the experiments at Arizona State University were a necessary and important part of this work.
Data reliability in complex directed networks
NASA Astrophysics Data System (ADS)
Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir
2013-12-01
The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of spurious and missing interactions has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way.
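To make the link-reliability task concrete, the sketch below ranks plausible missing directed links by a crude common-neighbour score (the number of directed two-step paths between the endpoints). This is only a simple baseline on a random placeholder network, not the block-model inference of Guimerá and Sales-Pardo that the paper extends.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy directed network as an adjacency matrix (A[i, j] = 1 means link i -> j observed)
n = 60
A = (rng.random((n, n)) < 0.06).astype(int)
np.fill_diagonal(A, 0)

# Baseline reliability score for an unobserved link i -> j: the number of directed
# two-step paths i -> k -> j, normalized by the largest count in the network.
two_step = A @ A
scores = two_step / max(int(two_step.max()), 1)

# Rank the most plausible missing directed links
missing = [(float(scores[i, j]), i, j)
           for i in range(n) for j in range(n)
           if i != j and A[i, j] == 0]
missing.sort(reverse=True)
print("top candidate missing links (score, i, j):", missing[:5])
```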
Casimir-Polder shifts on quantum levitation states
NASA Astrophysics Data System (ADS)
Crépin, P.-P.; Dufour, G.; Guérout, R.; Lambrecht, A.; Reynaud, S.
2017-03-01
An ultracold atom above a horizontal mirror experiences quantum reflection from the attractive Casimir-Polder interaction, which holds it against gravity and leads to quantum levitation states. We analyze this system by using a Liouville transformation of the Schrödinger equation and a Langer coordinate adapted to problems with a classical turning point. Reflection on the Casimir-Polder attractive well is replaced by reflection on a repulsive wall, and the problem is then viewed as an ultracold atom trapped inside a cavity with gravity and Casimir-Polder potentials acting, respectively, as top and bottom mirrors. We calculate numerically Casimir-Polder shifts of the energies of the cavity resonances and propose an approximate treatment which is precise enough to discuss spectroscopy experiments aimed at tests of the weak-equivalence principle on antihydrogen. We also discuss the lifetimes by calculating complex energies associated with cavity resonances.
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
For parts with many features, there are more empty tracks during machining. If these tracks are not optimized, the machining efficiency will be seriously affected. In this paper, the characteristics of empty tracks are studied in detail. Combining this with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map model simulation technique is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This optimization method can handle not only simple structural parts but also complex structural parts, so as to effectively plan the empty tracks and greatly improve processing efficiency.
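A toy version of the constrained-TSP formulation described above: empty-track endpoints become cities, order constraints from the material-removal simulation become precedence pairs, and the shortest feasible visiting order is searched. Exhaustive search is used here only because the instance is tiny; a genetic algorithm with a feasibility repair step, as in the paper, would take over at realistic sizes. The coordinates and constraints are invented.

```python
import itertools
import math

# Hypothetical segment endpoints in machine coordinates
points = {0: (0, 0), 1: (4, 1), 2: (1, 5), 3: (6, 4), 4: (2, 2), 5: (5, 6)}
# Precedence constraints: segment a must be visited before segment b
precedence = [(1, 3), (2, 5)]

def dist(a, b):
    return math.dist(points[a], points[b])

def tour_length(order):
    # Total empty-travel distance along the visiting order (open path)
    return sum(dist(a, b) for a, b in zip(order, order[1:]))

def feasible(order):
    pos = {seg: i for i, seg in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in precedence)

start = 0
best = min(
    (
        [start] + list(perm)
        for perm in itertools.permutations([s for s in points if s != start])
        if feasible([start] + list(perm))
    ),
    key=tour_length,
)
print("best feasible order:", best, "length:", round(tour_length(best), 3))
```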
Computational modelling of cellular level metabolism
NASA Astrophysics Data System (ADS)
Calvetti, D.; Heino, J.; Somersalo, E.
2008-07-01
The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations and possibly the rates of change of some of the concentrations based on data which are often scarce, noisy, and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model for cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.
NASA Astrophysics Data System (ADS)
Porter, William J.; Drut, Joaquín E.
2017-05-01
Path-integral analyses originally pioneered in the study of the complex-phase problem afflicting lattice calculations of finite-density quantum chromodynamics are generalized to nonrelativistic Fermi gases with repulsive interactions. Using arguments similar to those previously applied to relativistic theories, we show that the analogous problem in nonrelativistic systems manifests itself naturally in Tan's contact as a nontrivial cancellation between terms with varied dependence on extensive thermodynamic quantities. We analyze that case under the assumption of a Gaussian phase distribution, which is supported by our Monte Carlo calculations and perturbative considerations. We further generalize these results to observables other than the contact, as well as to polarized systems and systems with fixed particle number. Our results are quite general in that they apply to repulsive multicomponent fermions, they are independent of dimensionality or trapping potential, and they hold in the ground state as well as at finite temperature.
An efficient hybrid technique in RCS predictions of complex targets at high frequencies
NASA Astrophysics Data System (ADS)
Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe
2017-09-01
Most computer codes in Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but not applicable for the computation of the RCS of all surfaces of a complex object due to the presence of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques: GTD and PO, considering the advantages and avoiding the disadvantages of each of them. A very efficient and accurate method to analyze the RCS of complex structures at high frequencies is obtained with the new combination. The proposed new method has been validated by comparing RCS results obtained for some simple cases using the proposed approach and RCS obtained using the rigorous technique of the Method of Moments (MoM). Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and the efficiency of the hybrid method and its suitability for the computation of the RCS of really large and complex targets at high frequencies.
On Complex Water Conflicts: Role of Enabling Conditions for Pragmatic Resolution
NASA Astrophysics Data System (ADS)
Islam, S.; Choudhury, E.
2016-12-01
Many of our current and emerging water problems are interconnected and cross boundaries, domains, scales, and sectors. These boundary-crossing water problems are neither static nor linear but are often interconnected nonlinearly with other problems and feedbacks. The solution space for these complex problems - involving interdependent variables, processes, actors, and institutions - cannot be pre-stated. We need to recognize the disconnect among values, interests, and tools as well as problems, policies, and politics. Scientific and technological solutions are desired for efficiency and reliability, but need to be politically feasible and actionable. Governing and managing complex water problems require difficult tradeoffs in exploring and sharing benefits and burdens through carefully crafted negotiation processes. The crafting of such a negotiation process, we argue, constitutes a pragmatic approach to negotiation - one that is based on the identification of enabling conditions - as opposed to mechanistic causal explanations, and rooted in contextual conditions to specify and ensure the principles of equity and sustainability. We will use two case studies to demonstrate the efficacy of the proposed principled pragmatic approach to addressing complex water problems.
NASA Astrophysics Data System (ADS)
Ollé, Mercè; Pacha, Joan R.
1999-11-01
In the present work we use certain isolated symmetric periodic orbits found in some limiting Restricted Three-Body Problems to obtain, by numerical continuation, families of symmetric periodic orbits of the more general Spatial Elliptic Restricted Three Body Problem. In particular, the Planar Isosceles Restricted Three Body Problem, the Sitnikov Problem and the MacMillan problem are considered. A stability study for the periodic orbits of the families obtained - especially focused on detecting transitions to complex instability - is also presented.
NASA Technical Reports Server (NTRS)
Cahan, Boris D.
1991-01-01
The Iterative Boundary Integral Equation Method (I-BIEM) has been applied to the problem of frequency dispersion at a disk electrode in a finite geometry. The I-BIEM permits the direct evaluation of the AC potential (a complex variable) using complex boundary conditions. The point spacing was made highly nonuniform, to give extremely high resolution in those regions where the variables change most rapidly, i.e., in the vicinity of the edge of the disk. Results are analyzed with respect to IR correction, equipotential surfaces, and reference electrode placement. The current distribution is also examined for a ring-disk configuration, with the ring and the disk at the same AC potential. It is shown that the apparent impedance of the disk is inductive at higher frequencies. The results are compared to analytic calculations from the literature, and usually agree to better than 0.001 percent.
Compressed learning and its applications to subcellular localization.
Zheng, Zhong-Long; Guo, Li; Jia, Jiong; Xie, Chen-Mao; Zeng, Wen-Cai; Yang, Jie
2011-09-01
One of the main challenges faced by biological applications is to predict protein subcellular localization accurately in an automatic fashion. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take simplifying the complexity of biological systems into account. Traditionally, such bio-data are analyzed by first performing a feature selection before classification. Motivated by CS (Compressed Sensing) theory, we propose a methodology that performs compressed learning with a sparseness criterion such that feature selection and dimension reduction are merged into one analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse methods are quite promising in dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
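One simple way to realize the "feature selection merged with classification" idea on conventional tools is an L1-penalized classifier, sketched below on synthetic data standing in for high-dimensional protein features. This is a stand-in illustration of sparsity-driven learning, not the compressed-learning algorithm of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional data: only a handful of the 500 features carry signal
X, y = make_classification(n_samples=300, n_features=500, n_informative=15,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty drives most coefficients to exactly zero, so feature selection
# and classifier training happen in a single fit rather than as two separate steps.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)

n_used = np.count_nonzero(clf.coef_)
print(f"features kept: {n_used} / {X.shape[1]}, test accuracy: {clf.score(X_te, y_te):.3f}")
```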
A multistage motion vector processing method for motion-compensated frame interpolation.
Huang, Ai- Mei; Nguyen, Truong Q
2008-05-01
In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of having broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving the structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid selecting an unreliable vector. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
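A constrained vector median filter of the kind mentioned above can be sketched in a few lines: choose the candidate motion vector minimizing the total distance to all candidates, while excluding vectors flagged as unreliable from being selected. The example vectors and reliability flags below are illustrative only.

```python
import numpy as np

def vector_median(candidates, reliable_mask=None):
    """Return the candidate vector minimizing the total L2 distance to all candidates;
    if a reliability mask is given, only reliable vectors are eligible for selection."""
    candidates = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=-1).sum(axis=1)
    if reliable_mask is not None:
        dists = np.where(np.asarray(reliable_mask), dists, np.inf)
    return candidates[int(np.argmin(dists))]

# Motion vectors of a block and its neighbours (pixels/frame); the outlier (20, -15)
# is flagged unreliable, e.g. because its block had high residual energy.
mvs = [(2, 1), (3, 1), (2, 2), (20, -15), (3, 2)]
reliable = [True, True, True, False, True]
print(vector_median(mvs))            # plain vector median
print(vector_median(mvs, reliable))  # constrained: the unreliable vector cannot win
```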
Observe, simplify, titrate, model, and synthesize: A paradigm for analyzing behavior
Alberts, Jeffrey R.
2013-01-01
Phenomena in behavior and their underlying neural mechanisms are exquisitely complex problems. Infrequently do we reflect on our basic strategies of investigation and analysis, or formally confront the actual challenges of achieving an understanding of the phenomena that inspire research. Philip Teitelbaum is distinct in his elegant approaches to understanding behavioral phenomena and their associated neural processes. He also articulated his views on effective approaches to scientific analyses of brain and behavior, his vision of how behavior and the nervous system are patterned, and what constitutes basic understanding. His rubrics involve careful observation and description of behavior, simplification of the complexity, analysis of elements, and re-integration through different forms of synthesis. Research on the development of huddling behavior by individual and groups of rats is reviewed in a context of Teitelbaum’s rubrics of research, with the goal of appreciating his broad and positive influence on the scientific community. PMID:22481081
Expert systems for space power supply - Design, analysis, and evaluation
NASA Technical Reports Server (NTRS)
Cooper, Ralph S.; Thomson, M. Kemer; Hoshor, Alan
1987-01-01
The feasibility of applying expert systems to the conceptual design, analysis, and evaluation of space power supplies in particular, and complex systems in general is evaluated. To do this, the space power supply design process and its associated knowledge base were analyzed and characterized in a form suitable for computer emulation of a human expert. The existing expert system tools and the results achieved with them were evaluated to assess their applicability to power system design. Some new concepts for combining program architectures (modular expert systems and algorithms) with information about the domain were applied to create a 'deep' system for handling the complex design problem. NOVICE, a code to solve a simplified version of a scoping study of a wide variety of power supply types for a broad range of missions, has been developed, programmed, and tested as a concrete feasibility demonstration.
Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software
NASA Astrophysics Data System (ADS)
Hellekson, Ron; Campbell, Scott
1988-06-01
Many optical systems have demanding requirements to package the system in a small 3 dimensional space. The use of computer graphic tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra Physics grocery store bar code scanner employs an especially complex 3 dimensional scan pattern to read bar code labels. By using a specially written program which interfaces with a computer aided design system, we have simulated many of the functions of this complex optical system. In this paper we will illustrate how a recent version of the scanner has been designed. We will discuss the use of computer graphics in the design process including interactive tweaking of the scan pattern, analysis of collected light, analysis of the scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.
A survey of noninteractive zero knowledge proof system and its applications.
Wu, Huixin; Wang, Feng
2014-01-01
Zero knowledge proof systems, which have received extensive attention since they were proposed, are an important branch of cryptography and computational complexity theory. Among them, a noninteractive zero knowledge proof system contains only one message, sent by the prover to the verifier. It is widely used in the construction of various types of cryptographic protocols and cryptographic algorithms because of its good privacy, authentication, and lower interactive complexity. This paper reviews and analyzes the basic principles of noninteractive zero knowledge proof systems and summarizes the research progress achieved on the following aspects: the definition and related models of noninteractive zero knowledge proof systems, noninteractive zero knowledge proof systems for NP problems, noninteractive statistical and perfect zero knowledge, the connections among noninteractive zero knowledge proof systems, interactive zero knowledge proof systems, and zaps, and the specific applications of noninteractive zero knowledge proof systems. This paper also points out future research directions.
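A standard way to obtain a noninteractive argument from an interactive one is the Fiat-Shamir heuristic; the sketch below applies it to a Schnorr proof of knowledge of a discrete logarithm, with the verifier's random challenge replaced by a hash of the public transcript. The parameters are toy-sized for readability and are not secure; this is a generic illustration, not a construction taken from the survey.

```python
import hashlib
import secrets

# Toy Schnorr group (NOT secure sizes): p = 2q + 1, g generates the order-q subgroup
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4                  # 2^2 mod p, hence of order q

def challenge(*ints):
    """Fiat-Shamir challenge: hash the public transcript down to an element of Z_q."""
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover knows x such that y = g^x mod p
x = secrets.randbelow(q)
y = pow(g, x, p)

# Single-message proof: the hash plays the role of the verifier's random challenge
r = secrets.randbelow(q)
t = pow(g, r, p)               # commitment
c = challenge(g, y, t)         # challenge derived by hashing, no interaction needed
s = (r + c * x) % q            # response
proof = (t, s)

# Verifier checks the one message (t, s) against the public statement y
t, s = proof
c = challenge(g, y, t)
print("proof accepted:", pow(g, s, p) == (t * pow(y, c, p)) % p)
```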
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Tracked robot controllers for climbing obstacles autonomously
NASA Astrophysics Data System (ADS)
Vincent, Isabelle
2009-05-01
Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.
Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V
2017-01-01
Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate potential inaccuracies of the cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology, and report on its use for an automated problem list generation task, its effect on the ground truth quality and system accuracy, and lessons learned from the effort.
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real world problems. However, there are two major problems inherent in complex fuzzy sets: they lack a sufficient parameterization tool and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing the users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.
ERIC Educational Resources Information Center
Bogard, Treavor; Liu, Min; Chiang, Yueh-hui Vanessa
2013-01-01
This multiple-case study examined how advanced learners solved a complex problem, focusing on how their frequency and application of cognitive processes contributed to differences in performance outcomes, and developing a mental model of a problem. Fifteen graduate students with backgrounds related to the problem context participated in the study.…
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
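For the multi-objective formulations discussed above, the central object is the set of non-dominated trade-offs. The sketch below filters a list of candidate solutions down to its Pareto front under minimization of both objectives; the candidate values are invented placeholders (e.g. time to reach steady state vs. total enzyme investment), not results from the metabolic case studies.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points`, assuming every objective is minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical candidate activation profiles scored on two conflicting objectives
candidates = [(3.2, 10.0), (4.0, 6.5), (2.9, 12.5), (5.1, 6.4), (4.5, 9.0), (3.5, 8.0)]
print(pareto_front(candidates))   # only the best trade-offs survive the filter
```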
The Complex Route to Success: Complex Problem-Solving Skills in the Prediction of University Success
ERIC Educational Resources Information Center
Stadler, Matthias J.; Becker, Nicolas; Greiff, Samuel; Spinath, Frank M.
2016-01-01
Successful completion of a university degree is a complex matter. Based on considerations regarding the demands of acquiring a university degree, the aim of this paper was to investigate the utility of complex problem-solving (CPS) skills in the prediction of objective and subjective university success (SUS). The key finding of this study was that…
Wang, Yong-Feng; Zhang, Fang-Qiu; Gu, Ji-Dong
2014-06-01
Denaturing gradient gel electrophoresis (DGGE) is a powerful technique to reveal the community structures and composition of microorganisms in complex natural environments and samples. However, positive and reproducible polymerase chain reaction (PCR) products, which are difficult to acquire for some specific samples due to low abundance of the target microorganisms, significantly impair the effective applications of DGGE. Thus, nested PCR is often introduced to generate positive PCR products from the complex samples, but another problem is introduced: the total number of thermocycles in nested PCR is usually unacceptably high, which results in skewed community structures through the generation of random or mismatched PCR products on the DGGE gel, as demonstrated in this study. Furthermore, nested PCR could not resolve the uneven representation issue with PCR products of complex samples with unequal richness of microbial populations. In order to solve these two problems in nested PCR, the general protocol was modified and improved in this study. First, a general PCR procedure was used to amplify the target genes with PCR primers without any guanine-cytosine (GC) clamp, and the resultant PCR products were purified and diluted to 0.01 μg ml⁻¹. Subsequently, the diluted PCR products were utilized as templates for re-amplification with the same PCR primers carrying the GC clamp for 17 cycles, and the products were finally subjected to DGGE analysis. We demonstrated that this is a much more reliable approach to obtain a high-quality DGGE profile with high reproducibility. Thus, we recommend the adoption of this improved protocol for analyzing microorganisms of low abundance in complex samples when applying the DGGE fingerprinting technique, to avoid biased results.
Analysis of Trajectory Flexibility Preservation Impact on Traffic Complexity
NASA Technical Reports Server (NTRS)
Idris, Husni; El-Wakil, Tarek; Wing, David J.
2009-01-01
The growing demand for air travel is increasing the need for mitigation of air traffic congestion and complexity problems, which are already at high levels. At the same time new information and automation technologies are enabling the distribution of tasks and decisions from the service providers to the users of the air traffic system, with potential capacity and cost benefits. This distribution of tasks and decisions raises the concern that independent user actions will decrease the predictability and increase the complexity of the traffic system, hence inhibiting and possibly reversing any potential benefits. In answer to this concern, the authors proposed the introduction of decision-making metrics for preserving user trajectory flexibility. The hypothesis is that such metrics will make user actions naturally mitigate traffic complexity. In this paper, the impact of using these metrics on traffic complexity is investigated. The scenarios analyzed include aircraft in en route airspace with each aircraft meeting a required time of arrival in a one-hour time horizon while mitigating the risk of loss of separation with the other aircraft, thus preserving its trajectory flexibility. The experiments showed promising results in that the individual trajectory flexibility preservation induced self-separation and self-organization effects in the overall traffic situation. The effects were quantified using traffic complexity metrics, namely dynamic density indicators, which indicated that using the flexibility metrics reduced aircraft density and the potential of loss of separation.
Stamovlasis, Dimitrios; Tsaparlis, Georgios
2003-07-01
The present study examines the role of limited human channel capacity from a science education perspective. A model of science problem solving has previously been validated by applying concepts and tools of complexity theory (the working-memory random-walk method). The method correlated the subjects' rank-order achievement scores in organic-synthesis chemistry problems with the subjects' working memory capacity. In this work, we apply the same nonlinear approach to a different data set, taken from chemical-equilibrium problem solving. In contrast to the organic-synthesis problems, these problems are algorithmic, require numerical calculations, and have a complex logical structure. As a result, these problems cause deviations from the model and affect the pattern observed with the nonlinear method. In addition to Baddeley's working memory capacity, Pascual-Leone's mental (M-) capacity is examined by the same random-walk method. As the complexity of the problem increases, the fractal dimension of the working memory random walk demonstrates a sudden drop, while the fractal dimension of the M-capacity random walk decreases in a linear fashion. A review of the basic features of the two capacities and their relation is included. The method and findings have consequences for problem solving not only in chemistry and science education, but also in other disciplines.
NASA Astrophysics Data System (ADS)
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-12-01
We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking, formulation and combination of equations require conceptual reasoning; simplification of equations requires manipulation of equations as computational tools. Mathematical complexity is operationally defined by the number and the type of equations to be manipulated concurrently due to the number of unknowns in each equation. We use two types of synthesis problems, namely, sequential and simultaneous tasks. Sequential synthesis tasks require a chronological application of pertinent concepts, and simultaneous synthesis tasks require a concurrent application of the pertinent concepts. A total of 179 physics major students from a second year mechanics course participated in the study. Data were collected from written tasks and individual interviews. Results show that mathematical complexity negatively influences the students' mathematical performance on both types of synthesis problems. However, for the sequential synthesis tasks, it interferes only with the students' simplification of equations. For the simultaneous synthesis tasks, mathematical complexity additionally impedes the students' formulation and combination of equations. Several reasons may explain this difference, including the students' different approaches to the two types of synthesis problems, cognitive load, and the variation of mathematical complexity within each synthesis type.
Qualitative review of usability problems in health information systems for radiology.
Dias, Camila Rodrigues; Pereira, Marluce Rodrigues; Freire, André Pimenta
2017-12-01
Radiology processes are commonly supported by Radiology Information System (RIS), Picture Archiving and Communication System (PACS) and other software for radiology. However, these information technologies can present usability problems that affect the performance of radiologists and physicians, especially considering the complexity of the tasks involved. The purpose of this study was to extract, classify and analyze qualitatively the usability problems in PACS, RIS and other software for radiology. A systematic review was performed to extract usability problems reported in empirical usability studies in the literature. The usability problems were categorized as violations of Nielsen and Molich's usability heuristics. The qualitative analysis indicated the causes and the effects of the identified usability problems. From the 431 papers initially identified, 10 met the study criteria. The analysis of the papers identified 90 instances of usability problems, classified into categories corresponding to established usability heuristics. The five heuristics with the highest number of instances of usability problems were "Flexibility and efficiency of use", "Consistency and standards", "Match between system and the real world", "Recognition rather than recall" and "Help and documentation", respectively. These problems can make the interaction time consuming, causing delays in tasks, dissatisfaction, frustration, preventing users from enjoying all the benefits and functionalities of the system, as well as leading to more errors and difficulties in carrying out clinical analyses. Furthermore, the present paper showed a lack of studies performed on systems for radiology, especially usability evaluations using formal methods of evaluation involving the final users. Copyright © 2017 Elsevier Inc. All rights reserved.
Gao, Xuanbo; Chang, Zhenyang; Dai, Wei; Tong, Ting; Zhang, Wanfeng; He, Sheng; Zhu, Shukui
2014-10-01
Abundant geochemical information can be acquired by analyzing the chemical compositions of petroleum geological samples. The information obtained from the analysis provides scientific evidence for petroleum exploration. However, these samples are complicated and can be easily influenced by physical (e.g., evaporation, emulsification, natural dispersion, dissolution and sorption), chemical (photodegradation) and biological (mainly microbial degradation) weathering processes. Therefore, it is very difficult to analyze petroleum geological samples, and they cannot be effectively separated by traditional gas chromatography/mass spectrometry. A newly developed separation technique, comprehensive two-dimensional gas chromatography (GC x GC), has unique advantages in complex sample analysis, and recently it has been applied to petroleum geological samples. This article mainly reviews the research progress over the last five years, the main problems, and future research directions for GC x GC applied in the area of petroleum geology.
Kwon, Jae-Sung; Oh, Duck-Won
2015-06-01
The purpose of this study was to demonstrate the use of task-based cognitive tests to detect potential problems in the assessment of work training for vocational rehabilitation. Eleven participants with a normal range of cognitive functioning scores were recruited for this study. Participants were all trainees who participated in a vocational training program. The Rey Complex Figure Test and the Allen Cognitive Level Screen were randomly administered to all participants. Responses to the tests were qualitatively analyzed with matrix and scatter charts. Observational outcomes derived from the tests indicated that response errors, distortions, and behavioral problems occurred in most participants. These factors may impede occupational performance despite normal cognitive function. These findings suggest that the use of task-based tests may be beneficial for detecting potential problems associated with the work performance of people with disabilities. Specific analysis using the task-based tests may be necessary to complete the decision-making process for vocational aptness. Furthermore, testing should be led by professionals with a higher specialization in this field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.
Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.
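The abstract does not give the details of the hybrid prototypes, so the following minimal Python sketch only illustrates the general hybridization idea it describes: a small neural component scores a pattern, and a rule-based expert-system component interprets the score and explains a recommendation. The weights, rules, and feature names are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

# --- Neural component: a tiny fixed-weight perceptron that scores a
# pressure/flow/vibration pattern suggestive of a pump fault (weights are
# illustrative, not trained on real plant data).
def neural_pattern_score(features):
    weights = np.array([0.8, -0.6, 0.5])   # [pressure_drop, flow_rate, vibration]
    bias = -0.2
    z = np.dot(weights, features) + bias
    return 1.0 / (1.0 + np.exp(-z))        # probability-like score in (0, 1)

# --- Expert-system component: heuristic rules that interpret the score and
# return an explanation and a recommended action.
RULES = [
    (lambda s, f: s > 0.7 and f[2] > 0.5,
     "High fault score with strong vibration: suspect cavitation; reduce load."),
    (lambda s, f: s > 0.7,
     "High fault score: schedule inspection of the pump seals."),
    (lambda s, f: True,
     "No fault pattern recognized: continue normal monitoring."),
]

def diagnose(features):
    score = neural_pattern_score(features)
    for condition, advice in RULES:
        if condition(score, features):
            return score, advice

score, advice = diagnose(np.array([0.9, 0.3, 0.7]))
print(f"fault score = {score:.2f}; advice: {advice}")
```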
Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
NASA Astrophysics Data System (ADS)
Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo
2017-04-01
In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely (generalizations of): i) the rate profile maximization in MIMO interference broadcast networks; and ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are distributed across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.
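The framework is built on inner convex approximations of a nonconvex problem. As a minimal, hedged illustration of that general idea (not the authors' algorithm), the sketch below applies successive convex approximation to a simple difference-of-convex objective: the concave part is linearized at the current iterate, and the resulting convex surrogate is minimized in closed form. The objective and all numerical values are assumptions chosen for clarity.

```python
import numpy as np

# Minimize f(x) = g(x) - h(x) with g, h convex, by replacing h with its
# linearization at the current iterate; the surrogate is a convex upper bound
# of f. Here g(x) = ||x||^2 and h(x) = 2*||x||_1, so each surrogate subproblem
# has a closed-form minimizer.

def sca_dc(x0, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad_h = 2.0 * np.sign(x)          # (sub)gradient of h at x
        # Surrogate: ||y||^2 - grad_h . y + const, minimized at y = grad_h / 2.
        x_new = grad_h / 2.0
        if np.linalg.norm(x_new - x) < 1e-9:
            break
        x = x_new
    return x

x_star = sca_dc(np.array([3.0, -0.4, 1.5]))
print("stationary point:", x_star)   # components settle at +/-1 (or 0)
```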
Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.
Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo
2018-05-10
The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
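A rough Python sketch of the kind of marginal-contribution best-response dynamics the abstract describes is given below, assuming a toy Shannon-capacity welfare function: each cell's utility for a trial assignment is its marginal contribution to total capacity, and cells update in rounds until no profitable deviation remains. The gains, noise level, and network size are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_channels = 6, 3
gain = rng.uniform(0.5, 1.5, n_cells)                 # direct link gains (illustrative)
cross = rng.uniform(0.01, 0.2, (n_cells, n_cells))    # cross-cell interference gains
np.fill_diagonal(cross, 0.0)
noise = 0.1

def welfare(choice):
    """Total system capacity; choice[i] = -1 means cell i is switched off."""
    cap = 0.0
    for i in range(n_cells):
        if choice[i] < 0:
            continue
        interf = sum(cross[j, i] for j in range(n_cells)
                     if j != i and choice[j] == choice[i])
        cap += np.log2(1.0 + gain[i] / (noise + interf))
    return cap

def marginal_contribution(choice, i):
    absent = choice.copy()
    absent[i] = -1
    return welfare(choice) - welfare(absent)

choice = rng.integers(0, n_channels, n_cells)
changed = True
while changed:                          # marginal-contribution best-response rounds
    changed = False
    for i in range(n_cells):
        current = marginal_contribution(choice, i)
        trials = []
        for c in range(n_channels):
            trial = choice.copy()
            trial[i] = c
            trials.append(marginal_contribution(trial, i))
        best = int(np.argmax(trials))
        if trials[best] > current + 1e-9:
            choice[i] = best
            changed = True

print("subchannel assignment:", choice, " welfare:", round(welfare(choice), 3))
```

Because the marginal-contribution utility aligns each cell's incentive with the total capacity, every profitable deviation strictly increases the welfare, so these rounds terminate at a pure-strategy equilibrium of the toy game.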
Modern Church Construction in Urals. Problems and Prospects
NASA Astrophysics Data System (ADS)
Surin, D. N.; Tereshina, O. B.
2017-11-01
The article analyzes the problems of modern Orthodox church architecture in Russia, with special attention paid to the problems of the Ural region. It justifies the importance of addressing this issue, which is connected with the revival of Orthodox traditions in Russia over the last decades and the need to compensate for the tens of thousands of churches destroyed in the Soviet period. Works on the theory and history of Russian architecture and art, and studies of the architectural heritage and building craft of the Ural craftsmen, are used as a scientific and methodological base for the development of church architecture. The article discloses the historically formed architectural features of Russian Orthodox churches, whose artistic image is designed to create a particular religious and aesthetic experience. It is stated that the restoration of the Russian church-building tradition is possible against the background of this architectural heritage. The article identifies the tendencies and vital tasks in church construction and outlines a complex of measures to address these tasks at the public and regional levels.
ERIC Educational Resources Information Center
Hay, M. Cameron
2017-01-01
Undergraduate student learning focuses on the development of disciplinary strength in majors and minors so that students gain depth in particular fields, foster individual expertise, and learn problem solving from disciplinary perspectives. However, the complexities of real-world problems do not respect disciplinary boundaries. Complex problems…
The Process of Solving Complex Problems
ERIC Educational Resources Information Center
Fischer, Andreas; Greiff, Samuel; Funke, Joachim
2012-01-01
This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application…
Communities of Practice: A New Approach to Solving Complex Educational Problems
ERIC Educational Resources Information Center
Cashman, J.; Linehan, P.; Rosser, M.
2007-01-01
Communities of Practice offer state agency personnel a promising approach for engaging stakeholder groups in collaboratively solving complex and, often, persistent problems in special education. Communities of Practice can help state agency personnel drive strategy, solve problems, promote the spread of best practices, develop members'…
6 Essential Questions for Problem Solving
ERIC Educational Resources Information Center
Kress, Nancy Emerson
2017-01-01
One of the primary expectations that the author has for her students is for them to develop greater independence when solving complex and unique mathematical problems. The story of how the author supports her students as they gain confidence and independence with complex and unique problem-solving tasks, while honoring their expectations with…
Students' and Teachers' Conceptual Metaphors for Mathematical Problem Solving
ERIC Educational Resources Information Center
Yee, Sean P.
2017-01-01
Metaphors are regularly used by mathematics teachers to relate difficult or complex concepts in classrooms. A complex topic of concern in mathematics education, and most STEM-based education classes, is problem solving. This study identified how students and teachers contextualize mathematical problem solving through their choice of metaphors.…
ERIC Educational Resources Information Center
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-01-01
We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking,…
NASA Astrophysics Data System (ADS)
Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.
1993-05-01
Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. Our study of sample heterogeneity in 9 samples of Apollo 15 olivine normative basalt (ONB) which exhibit a range in average grain size from coarse to fine are reported. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.
NASA Technical Reports Server (NTRS)
Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.
1993-01-01
Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. Our study of sample heterogeneity in 9 samples of Apollo 15 olivine normative basalt (ONB) which exhibit a range in average grain size from coarse to fine are reported. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.
The effects of monitoring environment on problem-solving performance.
Laird, Brian K; Bailey, Charles D; Hester, Kim
2018-01-01
While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes (no monitoring, human monitoring, and electronic monitoring), 114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.
ERIC Educational Resources Information Center
de Leeuw, L.
Sixty-four fifth and sixth-grade pupils were taught number series extrapolation by either an algorithm, fully prescribed problem-solving method or a heuristic, less prescribed method. The trained problems were within categories of two degrees of complexity. There were 16 subjects in each cell of the 2 by 2 design used. Aptitude Treatment…
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-man
2012-01-01
Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…
Phase transitions in Pareto optimal complex networks
NASA Astrophysics Data System (ADS)
Seoane, Luís F.; Solé, Ricard
2015-09-01
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
Wavelet multiresolution complex network for decoding brain fatigued behavior from P300 signals
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Wang, Zi-Bo; Yang, Yu-Xuan; Li, Shan; Dang, Wei-Dong; Mao, Xiao-Qian
2018-09-01
Brain-computer interface (BCI) enables users to interact with the environment without relying on neural pathways and muscles. P300 based BCI systems have been extensively used to achieve human-machine interaction. However, the appearance of fatigue symptoms during operation process leads to the decline in classification accuracy of P300. Characterizing brain cognitive process underlying normal and fatigue conditions constitutes a problem of vital importance in the field of brain science. We in this paper propose a novel wavelet decomposition based complex network method to efficiently analyze the P300 signals recorded in the image stimulus test based on classical 'Oddball' paradigm. Initially, multichannel EEG signals are decomposed into wavelet coefficient series. Then we construct complex network by treating electrodes as nodes and determining the connections according to the 2-norm distances between wavelet coefficient series. The analysis of topological structure and statistical index indicates that the properties of brain network demonstrate significant distinctions between normal status and fatigue status. More specifically, the brain network reconfiguration in response to the cognitive task in fatigue status is reflected as the enhancement of the small-worldness.
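A simplified reading of the construction described above is: wavelet-decompose each electrode's signal, measure pairwise 2-norm distances between the coefficient series, and connect electrodes whose distance falls below a threshold. The sketch below follows that reading using the PyWavelets package (an assumed dependency; the paper does not name its implementation), with random data standing in for real P300 epochs and an arbitrary median-distance threshold.

```python
import numpy as np
import pywt   # PyWavelets, assumed available

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 512
eeg = rng.standard_normal((n_channels, n_samples))   # placeholder for real P300 epochs

# 1. Wavelet-decompose each channel and concatenate the coefficient arrays.
def coeff_vector(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.concatenate(coeffs)

features = np.array([coeff_vector(ch) for ch in eeg])

# 2. Pairwise 2-norm distances between the channels' coefficient series.
dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)

# 3. Build the network: treat electrodes as nodes and connect pairs whose
#    distance is below a threshold (here the median off-diagonal distance).
threshold = np.median(dist[np.triu_indices(n_channels, k=1)])
adjacency = (dist < threshold) & ~np.eye(n_channels, dtype=bool)

degree = adjacency.sum(axis=1)
print("node degrees:", degree)
```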
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
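DeMAID's genetic-algorithm details are not spelled out in the abstract, so the following Python sketch only illustrates the underlying idea: a permutation-encoded GA that searches for an ordering of coupled design processes minimizing the number of feedback couplings (data dependencies that point backward in the execution order). The coupling matrix, population size, and operators are illustrative assumptions.

```python
import random

random.seed(0)
N = 6
# coupling[i][j] = 1 means process i feeds data to process j (illustrative).
coupling = [[0, 1, 0, 1, 0, 0],
            [0, 0, 1, 0, 1, 0],
            [1, 0, 0, 0, 0, 1],
            [0, 0, 1, 0, 1, 0],
            [0, 0, 0, 0, 0, 1],
            [0, 1, 0, 0, 0, 0]]

def feedbacks(order):
    """Number of couplings pointing backward in the ordering (to be minimized)."""
    pos = {p: k for k, p in enumerate(order)}
    return sum(coupling[i][j] for i in range(N) for j in range(N)
               if coupling[i][j] and pos[j] < pos[i])

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in parent b's order."""
    lo, hi = sorted(random.sample(range(N), 2))
    child = a[lo:hi]
    child += [p for p in b if p not in child]
    return child

def mutate(order):
    i, j = random.sample(range(N), 2)
    order[i], order[j] = order[j], order[i]

population = [random.sample(range(N), N) for _ in range(30)]
for _ in range(200):
    population.sort(key=feedbacks)
    survivors = population[:10]
    children = []
    while len(children) < 20:
        child = crossover(*random.sample(survivors, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    population = survivors + children

best = min(population, key=feedbacks)
print("best ordering:", best, "feedback couplings:", feedbacks(best))
```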
Chen, Li-Fen; Lin, Ching-En; Chou, Yu-Ching; Mao, Wei-Chung; Chen, Yi-Chyan; Tzeng, Nian-Sheng
2013-01-01
Objective Complex sleep behaviors (CSBs) are classified as “parasomnias” in the International Classification of Sleep Disorders, Second Edition (ICSD-2). To assess the potential danger after taking two short-acting Z-hypnosedative drugs, we estimated the incidence of CSBs in nonpsychotic patients in Taiwan. Methods Subjects (N = 1,220) using zolpidem or zopiclone were enrolled from the psychiatric outpatient clinics of a medical center in Taiwan over a 16-month period in 2006–2007. Subjects taking zolpidem (N = 1,132) and subjects taking zopiclone (N = 88) were analyzed. All subjects completed a questionnaire that included demographic data and complex sleep behaviors after taking hypnotics. Results Among zolpidem and zopiclone users, 3.28% of patients reported incidents of somnambulism or amnesic sleep-related behavior problems. The incidences of CSBs with zolpidem and zopiclone were 3.27% and 3.41%, respectively, which was significantly lower than in other studies in Taiwan. Conclusion These results serve as a reminder for clinicians to make inquiries regarding any unusual parasomnic activities when prescribing zolpidem or zopiclone. PMID:23976857
Core regulatory network motif underlies the ocellar complex patterning in Drosophila melanogaster
NASA Astrophysics Data System (ADS)
Aguilar-Hidalgo, D.; Lemos, M. C.; Córdoba, A.
2015-03-01
During organogenesis, developmental programs governed by Gene Regulatory Networks (GRN) define the functionality, size and shape of the different constituents of living organisms. Robustness, thus, is an essential characteristic that GRNs need to fulfill in order to maintain viability and reproducibility in a species. In the present work we analyze the robustness of the patterning for ocellar complex formation in the fly Drosophila melanogaster. We have systematically pruned the GRN that drives the development of this visual system to obtain the minimum pathway able to satisfy this pattern. We found that the mechanism underlying the patterning obeys the dynamics of a three-node network motif with a double negative feedback loop fed by a morphogenetic gradient that triggers the inhibition in a French flag problem fashion. A Boolean model of the GRN confirms robustness of the patterning mechanism, showing the same result for different network complexity levels. Interestingly, the network provides a steady-state solution in the interocellar part of the pattern and an oscillatory regime in the ocelli. This theoretical result predicts that the ocellar pattern may involve oscillatory dynamics in its genetic regulation.
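As a toy illustration of the qualitative behavior described (a double-negative feedback motif driven by a morphogen gradient, steady between the ocelli and oscillatory within them), the following Python sketch runs a synchronous Boolean update of two mutually repressing nodes, with one of them additionally requiring the morphogen. It is not the actual ocellar GRN; the gradient shape, thresholds, and update rules are assumptions.

```python
import numpy as np

positions = np.linspace(0, 1, 21)
morphogen = (np.exp(-5 * positions) > 0.3).astype(int)   # Boolean gradient readout

def simulate(m, steps=8):
    """Synchronous Boolean update of a toy double-negative feedback motif
    (X -| Y, Y -| X) in which X also requires the morphogen m; returns the
    trajectory of X."""
    x, y = 0, 0
    traj = []
    for _ in range(steps):
        x, y = (m and not y), (not x)
        traj.append(int(x))
    return traj

for pos, m in zip(positions, morphogen):
    traj = simulate(m)
    label = "oscillatory" if len(set(traj[2:])) > 1 else "steady"
    print(f"x = {pos:.2f}  morphogen = {m}  X trajectory = {traj}  ({label})")
```

With these assumed rules, cells reading a high morphogen level cycle between the two states while cells outside the gradient settle into a fixed point, mirroring the steady/oscillatory split reported in the abstract.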
Studies in nonlinear problems of energy. Progress report, October 1, 1993--September 30, 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matkowsky, B.J.
1994-09-01
The authors concentrate on modeling, analysis and large-scale scientific computation of combustion and flame propagation phenomena, with emphasis on the transition from laminar to turbulent combustion. In the transition process a flame passes through a sequence of stages exhibiting increasingly complex spatial and temporal patterns which serve as signatures identifying each stage. Often the transitions arise via bifurcation. The authors investigate nonlinear dynamics, bifurcation and pattern formation in the successive stages of transition. They describe the stability of combustion waves, and transitions to combustion waves exhibiting progressively higher degrees of spatio-temporal complexity. One aspect of this research program is the systematic derivation of appropriate, approximate models from the original models governing combustion. The approximate models are then analyzed. The authors are particularly interested in understanding the basic mechanisms affecting combustion, which is a prerequisite to effective control of the process. They are interested in determining the effects of varying control parameters such as the Nusselt number, Lewis number, heat release, activation energy, Damkohler number, Reynolds number, Prandtl number, Peclet number, etc. The authors have also considered a number of problems in self-propagating high-temperature synthesis (SHS), in which combustion waves are employed to synthesize advanced materials. Efforts are directed toward understanding fundamental mechanisms. 167 refs.
Integrated Resource Planning Model (IRPM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, T. B.
2010-04-01
The Integrated Resource Planning Model (IRPM) is a decision-support software product for resource-and-capacity planning. Users can evaluate changing constraints on schedule performance, projected cost, and resource use. IRPM is a unique software tool that can analyze complex business situations from a basic supply chain to an integrated production facility to a distributed manufacturing complex. IRPM can be efficiently configured through a user-friendly graphical interface to rapidly provide charts, graphs, tables, and/or written results to summarize postulated business scenarios. There is not a similar integrated resource planning software package presently available. Many different businesses (from government to large corporations as well as medium-to-small manufacturing concerns) could save thousands of dollars and hundreds of labor hours in resource and schedule planning costs. Those businesses also could avoid millions of dollars of revenue lost from fear of overcommitting or from penalties and lost future business for failing to meet promised delivery by using IRPM to perform what-if business-case evaluations. Tough production planning questions that previously were left unanswered can now be answered with a high degree of certainty. Businesses can anticipate production problems and have solutions in hand to deal with those problems. IRPM allows companies to make better plans, decisions, and investments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Homer; Ashok Varikuti; Xinming Ou
Various tools exist to analyze enterprise network systems and to produce attack graphs detailing how attackers might penetrate into the system. These attack graphs, however, are often complex and difficult to comprehend fully, and a human user may find it problematic to reach appropriate configuration decisions. This paper presents methodologies that can 1) automatically identify portions of an attack graph that do not help a user to understand the core security problems and so can be trimmed, and 2) automatically group similar attack steps as virtual nodes in a model of the network topology, to immediately increase the understandability of the data. We believe both methods are important steps toward improving visualization of attack graphs to make them more useful in configuration management for large enterprise networks. We implemented our methods using one of the existing attack-graph toolkits. Initial experimentation shows that the proposed approaches can 1) significantly reduce the complexity of attack graphs by trimming a large portion of the graph that is not needed for a user to understand the security problem, and 2) significantly increase the accessibility and understandability of the data presented in the attack graph by clearly showing, within a generated visualization of the network topology, the number and type of potential attacks to which each host is exposed.
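The paper's toolkit is not specified beyond "one of the existing attack-graph toolkits", so the sketch below only illustrates the two ideas in generic form with networkx (an assumed dependency): trimming keeps nodes that lie on some attacker-to-target path, and grouping collapses hosts with identical predecessors and successors into one virtual node. The hosts and edges are hypothetical.

```python
import networkx as nx

# Toy attack graph: edges point from an attacker foothold toward assets.
G = nx.DiGraph([
    ("attacker", "web01"), ("attacker", "web02"),
    ("web01", "app01"), ("web02", "app01"),
    ("app01", "db01"),
    ("attacker", "printer"),             # dead-end branch, irrelevant to the target
])
target = "db01"

# 1. Trim: keep only nodes that lie on some attacker-to-target path.
relevant = {n for n in G
            if nx.has_path(G, "attacker", n) and nx.has_path(G, n, target)}
trimmed = G.subgraph(relevant).copy()

# 2. Group: hosts with identical predecessor and successor sets can collapse
#    into one virtual node (here web01 and web02 merge).
signature = {}
for n in trimmed:
    key = (frozenset(trimmed.predecessors(n)), frozenset(trimmed.successors(n)))
    signature.setdefault(key, []).append(n)
groups = [nodes for nodes in signature.values() if len(nodes) > 1]

print("trimmed nodes:", sorted(trimmed.nodes))
print("mergeable groups:", groups)
```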
Translation Analysis on Civil Engineering Text Produced by Machine Translator
NASA Astrophysics Data System (ADS)
Sutopo, Anam
2018-02-01
Translation is greatly needed in communication, since people often have serious problems with the language used. Translation is usually carried out by a person in charge of translating the material, but it can also be done by machine. This is called machine translation, reflected in programs developed by programmers; one of them is Transtool. Many people have used Transtool to help them solve problems related to translation activities. This paper discusses how important the Transtool program is, how effective it is, and what function it serves for its users. The study applies qualitative research. The sources of data were documents and informants, and documentation and in-depth interviewing were used as the techniques for collecting data. The collected data were analyzed by using interactive analysis. The results of the study show that, first, the Transtool program is helpful for people in translating civil engineering texts and functions as an aid or helper; second, the Transtool software program works effectively enough; and third, the translations produced by Transtool are good for short and simple sentences but not readable, understandable, or accurate for long sentences (compound, complex, and compound-complex), though the results remain informative. The translated material must be edited by a professional translator.
Secure quantum private information retrieval using phase-encoded queries
NASA Astrophysics Data System (ADS)
Olejnik, Lukasz
2011-08-01
We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.
Secure quantum private information retrieval using phase-encoded queries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olejnik, Lukasz
We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.
The Social Process of Analyzing Real Water Resource Systems Plans and Management Policies
NASA Astrophysics Data System (ADS)
Loucks, Daniel
2016-04-01
Developing and applying systems analysis methods for improving the development and management of real world water resource systems, I have learned, is primarily a social process. This talk is a call for more recognition of this reality in the modeling approaches we propose in the papers and books we publish. The mathematical models designed to inform planners and managers of water systems that we see in many of our journals often seem more complex than they need be. They also often seem not as connected to reality as they could be. While it may be easier to publish descriptions of complex models than simpler ones, and while adding complexity to models might make them better able to mimic or resemble the actual complexity of the real physical and/or social systems or processes being analyzed, the usefulness of such models often can be an illusion. Sometimes the important features of reality that are of concern or interest to those who make decisions can be adequately captured using relatively simple models. Finding the right balance for the particular issues being addressed or the particular decisions that need to be made is an art. When applied to real world problems or issues in specific basins or regions, systems modeling projects often involve more attention to the social aspects than the mathematical ones. Mathematical models addressing connected, interacting, interdependent components of complex water systems are in fact some of the most useful methods we have to study and better understand the systems we manage around us. They can help us identify and evaluate possible alternative solutions to problems facing humanity today. The study of real world systems of interacting components using mathematical models is commonly called applied systems analysis. Performing such analyses with decision makers rather than of decision makers is critical if the needed trust between project personnel and their clients is to be developed. Using examples from recent and ongoing modeling projects in different parts of the world, this talk will attempt to show how the degree of project success depends on the degree of attention given to the communication between project personnel, the stakeholders and decision-making institutions. It will also highlight how initial project terms-of-reference and expected outcomes can change, sometimes in surprising ways, during the course of such projects. Changing project objectives often result from changing stakeholder values, emphasizing the need for analyses that can adapt to this uncertainty.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
[Sexually transmitted diseases: the impact of stigma and taboo on current medical care].
Badura-Lotter, G
2014-04-01
Sexually transmitted diseases (STD) are probably the most tabooed diseases we know. The many taboos and the related stigmata shape patients' lives and significantly influence health care policies, medical research, and current problems in medical ethics. To better understand these complex influences, the still powerful taboos and related metaphors associated with illness and disease are analyzed within their cultural and historical background and concerning the actual impact on patient care and research. It becomes obvious that research and health care policies cannot be satisfyingly successful in helping people affected by STDs as long as these "nonscientific" factors are not taken into account.
Statistical Inference for Big Data Problems in Molecular Biophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Savol, Andrej; Burger, Virginia
2012-01-01
We highlight the role of statistical inference techniques in providing biological insights from analyzing long time-scale molecular simulation data. Technological and algorithmic improvements in computation have brought molecular simulations to the forefront of techniques applied to investigating the basis of living systems. While these longer simulations, which are increasingly complex and presently reach petabyte scales, promise a detailed view into microscopic behavior, teasing out the important information has become a true challenge in its own right. Mining these data for important patterns is critical to automating therapeutic intervention discovery, improving protein design, and fundamentally understanding the mechanistic basis of cellular homeostasis.
Simulation software: engineer processes before reengineering.
Lepley, C J
2001-01-01
People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted efforts. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer aided simulation to illustrate their use in making decisions to improve workflow design.
NASA Technical Reports Server (NTRS)
Ghil, M.
1980-01-01
A unified theoretical approach to both the four-dimensional assimilation of asynoptic data and the initialization problem is attempted. This approach relies on the derivation of certain relationships between geopotential tendencies and tendencies of the horizontal velocity field in primitive-equation models of atmospheric flow. The approach is worked out and analyzed in detail for some simple barotropic models. Certain independent results of numerical experiments for the time-continuous assimilation of real asynoptic meteorological data into a complex, baroclinic weather prediction model are discussed in the context of the present approach. Tentative inferences are drawn for practical assimilation procedures.
Aeroelastic-Acoustics Simulation of Flight Systems
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.; Choi, S.; Ibrahim, A.
2009-01-01
This paper describes the details of a numerical finite element (FE) based analysis procedure and a resulting code for the simulation of the acoustics phenomenon arising from aeroelastic interactions. Both CFD and structural simulations are based on FE discretization employing unstructured grids. The sound pressure level (SPL) on structural surfaces is calculated from the root mean square (RMS) of the unsteady pressure and the acoustic wave frequencies are computed from a fast Fourier transform (FFT) of the unsteady pressure distribution as a function of time. The resulting tool proves to be unique as it is designed to analyze complex practical problems, involving large scale computations, in a routine fashion.
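The quantities described above are straightforward to compute from a pressure time history: the sound pressure level follows from the RMS of the unsteady pressure referenced to 20 µPa, and the dominant acoustic frequencies follow from an FFT. The NumPy sketch below shows this on a synthetic signal; the sampling rate, tones, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs = 2000.0                       # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic unsteady surface pressure: two tones plus broadband noise, in Pa.
p = 2.0 * np.sin(2 * np.pi * 85 * t) + 0.5 * np.sin(2 * np.pi * 340 * t)
p += 0.05 * np.random.default_rng(0).standard_normal(t.size)

p_fluct = p - p.mean()            # unsteady part only
p_rms = np.sqrt(np.mean(p_fluct ** 2))
p_ref = 20e-6                     # standard acoustic reference pressure, Pa
spl = 20.0 * np.log10(p_rms / p_ref)

spectrum = np.abs(np.fft.rfft(p_fluct)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum)]

print(f"SPL = {spl:.1f} dB re 20 uPa; dominant frequency = {dominant:.0f} Hz")
```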
Looi, Git-Marie Ejneborn; Sävenstedt, Stefan; Engström, Åsa
2016-01-01
The nurse-patient interaction is the cornerstone of psychiatric care, yet the concept "mental health nursing" is difficult to describe. This article aims to address this problem through the experiences of nursing students. Online journals from 14 nursing students were analyzed using qualitative content analysis, resulting in three categories: Trusting the Trusting Relationship, Voicing the Unspoken Needs, and Balancing the Dynamics of Doing and Being. This study demonstrates that providing nursing care based on trusting relationships is not a demanding task, but it takes place in a complex environment that has a tendency to make easy things complicated.
TRIZ theory in NEA photocathode preparation system
NASA Astrophysics Data System (ADS)
Qiao, Jianliang; Huang, Dayong; Li, Xiangjiang; Gao, Youtang
2016-09-01
The solutions to the engineering problems were provided according to the innovation principle based on the theory of TRIZ. The ultra high vacuum test and evaluation system for the preparation of negative electron affinity (NEA) photocathode has the characteristics of complex structure and powerful functions. Segmentation principle, advance function principle, curved surface principle, dynamic characteristics principle and nested principle adopted by the design of ultra high vacuum test and evaluation system for cathode preparation were analyzed. The applications of the physical contradiction and the substance-field analysis method of the theory of TRIZ in the cathode preparation ultra high vacuum test and evaluation system were discussed.
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of people interacting with each other, not only human-machine interaction. The primary concern is not how people can interact with computers, but how shall we design computers to help people work together? An analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse.
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Exact solution of some linear matrix equations using algebraic methods
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1977-01-01
A study is made of solution methods for linear matrix equations, including Lyapunov's equation, using methods of modern algebra. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution of the problem. The action f_BA is introduced and a Basic Lemma is proven. The equation PA + BP = -C as well as the Lyapunov equation are analyzed. Algorithms are given for the solution of the Lyapunov equation, and comments are made on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
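For a concrete sense of how the equation PA + BP = -C can be solved by finite algebraic procedures, one standard route (not necessarily the specific algebraic method of the paper) is the vec/Kronecker identity, which turns the matrix equation into the linear system (Aᵀ ⊗ I + I ⊗ B) vec(P) = -vec(C). The NumPy sketch below applies it to small illustrative matrices.

```python
import numpy as np

def solve_pa_plus_bp(A, B, C):
    """Solve P A + B P = -C via the vec identity
    (A^T kron I + I kron B) vec(P) = -vec(C), with column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, B)
    vec_p = np.linalg.solve(M, -C.flatten(order="F"))
    return vec_p.reshape((n, n), order="F")

# Illustrative stable matrices (eigenvalues in the left half-plane).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[-1.0, 0.5], [0.2, -4.0]])
C = np.eye(2)

P = solve_pa_plus_bp(A, B, C)
print("residual norm:", np.linalg.norm(P @ A + B @ P + C))   # ~1e-16
```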
Application of harmonic detection technology in methane telemetry
NASA Astrophysics Data System (ADS)
Huo, Yuehua; Fan, Weiqiang
2017-08-01
Methane telemetry plays a vital role in ensuring the safe production of coal mines and monitoring the leakage of natural gas pipelines. Harmonic detection is the key technology of methane telemetry accuracy and sensitivity, but the current telemetry distance is short, the relationship between different modulation parameters is complex, and the harmonic signal is affected by noise interference. These factors seriously affect the development of harmonic detection technology. In this paper, the principle of methane telemetry based on harmonic detection technology is introduced. The present situation and characteristics of harmonic detection technology are expounded. The problems existing in harmonic detection are analyzed. Finally, the future development trend is discussed.
NASA Astrophysics Data System (ADS)
Nikonorov, Aleksandr; Terleev, Vitaly; Badenko, Vladimir; Mirschel, Wilfried; Abakumov, Evgeny; Ginevsky, Roman; Lazarev, Viktor; Togo, Issa; Volkova, Yulia; Melnichuk, Aleksandr; Dunaieva, Ielizaveta; Akimov, Luka
2017-10-01
The problem of flood protection measures is considered in the paper. The regulation of river flow by a system of Self-Regulated Flood Dams (SRFD) is analyzed. A method of SRFD modeling in a GIS environment is proposed. The ecological aspect of SRFD management is considered on the basis of the hydrophysical properties of the soil. An improved Mualem-Van Genuchten approach is proposed for evaluating the influence of possible SRFD locations on the soil of the flooded territory, i.e. the temporary reservoirs. The importance and utility of the proposed complex method are stated.
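The Mualem-van Genuchten framework mentioned above rests on the classical van Genuchten retention curve θ(h) = θr + (θs − θr)[1 + (α|h|)^n]^(−m) with m = 1 − 1/n. The sketch below evaluates that standard curve only; it does not reproduce the paper's improved variant, and the loam-like parameter values are illustrative assumptions.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of pressure head h (negative when
    unsaturated), using the classical van Genuchten retention curve."""
    h = np.asarray(h, dtype=float)
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)        # effective saturation
    theta = theta_r + (theta_s - theta_r) * se
    return np.where(h >= 0.0, theta_s, theta)             # saturated at or above the water table

# Illustrative loam-like parameters (not site-specific values from the paper).
heads = np.array([0.0, -10.0, -100.0, -1000.0, -15000.0])   # pressure head, cm
theta = van_genuchten_theta(heads, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56)
for h, th in zip(heads, theta):
    print(f"h = {h:8.0f} cm  ->  theta = {th:.3f}")
```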
Bioethics: secular philosophy, Jewish law and modern medicine.
Steinberg, A
1989-07-01
The recent unprecedented expansion of scientific knowledge and the greater awareness and involvement of the public in medical matters, as well as additional causes described here, have impelled the development of a new form of bioethics over the past three decades. Jewish law and philosophy have always dealt with medical issues. In recent years, however, a voluminous body of literature devoted to Jewish medical ethics has developed. It covers all relevant issues and offers Jewish solutions to many complex problems arising from the recent scientific breakthroughs. This article analyzes the differences between Jewish and secular philosophies regarding fundamental moral theories relevant to modern medical ethics.
Transformations of software design and code may lead to reduced errors
NASA Technical Reports Server (NTRS)
Connelly, E. M.
1983-01-01
The capability of programmers and non-programmers to specify problem solutions by developing example solutions, and of programmers also by writing computer programs, was investigated; each method of specification was carried out at various levels of problem complexity. The level of difficulty of each problem was reflected by the number of steps needed by the user to develop a solution. Machine processing of the user inputs permitted inferences to be developed about the algorithms required to solve a particular problem. The interactive feedback of processing results led users to a more precise definition of the desired solution. Two participant groups (programmers and bookkeepers/accountants) working with three levels of problem complexity and three levels of processor complexity were used. The experimental task employed required specification of a logic for solution of a Navy task force problem.
Hoskinson, A-M; Caballero, M D; Knight, J K
2013-06-01
If students are to successfully grapple with authentic, complex biological problems as scientists and citizens, they need practice solving such problems during their undergraduate years. Physics education researchers have investigated student problem solving for the past three decades. Although physics and biology problems differ in structure and content, the instructional purposes align closely: explaining patterns and processes in the natural world and making predictions about physical and biological systems. In this paper, we discuss how research-supported approaches developed by physics education researchers can be adopted by biologists to enhance student problem-solving skills. First, we compare the problems that biology students are typically asked to solve with authentic, complex problems. We then describe the development of research-validated physics curricula emphasizing process skills in problem solving. We show that solving authentic, complex biology problems requires many of the same skills that practicing physicists and biologists use in representing problems, seeking relationships, making predictions, and verifying or checking solutions. We assert that acquiring these skills can help biology students become competent problem solvers. Finally, we propose how biology scholars can apply lessons from physics education in their classrooms and inspire new studies in biology education research.
Winickoff, David E; Mondou, Matthieu
2017-02-01
While there is ample scholarly work on regulatory science within the state, or single-sited global institutions, there is less on its operation within complex modes of global governance that are decentered, overlapping, multi-sectorial and multi-leveled. Using a co-productionist framework, this study identifies 'epistemic jurisdiction' - the power to produce or warrant technical knowledge for a given political community, topical arena or geographical territory - as a central problem for regulatory science in complex governance. We explore these dynamics in the arena of global sustainability standards for biofuels. We select three institutional fora as sites of inquiry: the European Union's Renewable Energy Directive, the Roundtable on Sustainable Biomaterials, and the International Organization for Standardization. These cases allow us to analyze how the co-production of sustainability science responds to problems of epistemic jurisdiction in the global regulatory order. First, different problems of epistemic jurisdiction beset different standard-setting bodies, and these problems shape both the content of regulatory science and the procedures designed to make it authoritative. Second, in order to produce global regulatory science, technical bodies must manage an array of conflicting imperatives - including scientific virtue, due process and the need to recruit adoptees to perpetuate the standard. At different levels of governance, standard drafters struggle to balance loyalties to country, to company or constituency and to the larger project of internationalization. Confronted with these sometimes conflicting pressures, actors across the standards system quite self-consciously maneuver to build or retain authority for their forum through a combination of scientific adjustment and political negotiation. Third, the evidentiary demands of regulatory science in global administrative spaces are deeply affected by 1) a market for standards, in which firms and states can choose the cheapest sustainability certification, and 2) the international trade regime, in which the long shadow of WTO law exerts a powerful disciplining function.
Variational data assimilation for the initial-value dynamo problem.
Li, Kuan; Jackson, Andrew; Livermore, Philip W
2011-11-01
The secular variation of the geomagnetic field as observed at the Earth's surface results from the complex magnetohydrodynamics taking place in the fluid core of the Earth. One way to analyze this system is to use the data in concert with an underlying dynamical model of the system through the technique of variational data assimilation, in much the same way as is employed in meteorology and oceanography. The aim is to discover an optimal initial condition that leads to a trajectory of the system in agreement with observations. Taking the Earth's core to be an electrically conducting fluid sphere in which convection takes place, we develop the continuous adjoint forms of the magnetohydrodynamic equations that govern the dynamical system together with the corresponding numerical algorithms appropriate for a fully spectral method. These adjoint equations enable a computationally fast iterative improvement of the initial condition that determines the system evolution. The initial condition depends on the three dimensional form of quantities such as the magnetic field in the entire sphere. For the magnetic field, conservation of the divergence-free condition for the adjoint magnetic field requires the introduction of an adjoint pressure term satisfying a zero boundary condition. We thus find that solving the forward and adjoint dynamo system requires different numerical algorithms. In this paper, an efficient algorithm for numerically solving this problem is developed and tested for two illustrative problems in a whole sphere: one is a kinematic problem with prescribed velocity field, and the second is associated with the Hall-effect dynamo, exhibiting considerable nonlinearity. The algorithm exhibits reliable numerical accuracy and stability. Using both the analytical and the numerical techniques of this paper, the adjoint dynamo system can be solved directly with the same order of computational complexity as that required to solve the forward problem. These numerical techniques form a foundation for ultimate application to observations of the geomagnetic field over the time scale of centuries.
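To make the variational (adjoint) assimilation idea concrete, here is a minimal sketch for a toy linear model rather than the magnetohydrodynamic system of the paper: the initial condition of a discrete linear dynamical system is recovered from noisy observations by minimizing a misfit whose gradient is obtained from a backward adjoint sweep. All matrices and dimensions are hypothetical.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nx, nobs, nsteps = 4, 2, 20
A = np.eye(nx) + 0.05 * rng.standard_normal((nx, nx))   # hypothetical forward dynamics
H = rng.standard_normal((nobs, nx))                      # observation operator

x_true0 = rng.standard_normal(nx)
xs_true = [x_true0]
for _ in range(nsteps):
    xs_true.append(A @ xs_true[-1])
ys = [H @ x + 0.01 * rng.standard_normal(nobs) for x in xs_true]

def cost_and_gradient(x0):
    # Forward sweep of the model
    xs = [x0]
    for _ in range(nsteps):
        xs.append(A @ xs[-1])
    residuals = [H @ x - y for x, y in zip(xs, ys)]
    J = 0.5 * sum(r @ r for r in residuals)
    # Adjoint (backward) sweep: lam_k = A^T lam_{k+1} + H^T r_k; lam_0 is dJ/dx0
    lam = H.T @ residuals[-1]
    for k in range(nsteps - 1, -1, -1):
        lam = A.T @ lam + H.T @ residuals[k]
    return J, lam

res = minimize(cost_and_gradient, np.zeros(nx), jac=True, method="L-BFGS-B")
print("recovered initial-condition error:", np.linalg.norm(res.x - x_true0))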
Analogy as a strategy for supporting complex problem solving under uncertainty.
Chan, Joel; Paletz, Susannah B F; Schunn, Christian D
2012-11-01
Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.
Solving complex band structure problems with the FEAST eigenvalue algorithm
NASA Astrophysics Data System (ADS)
Laux, S. E.
2012-08-01
With straightforward extension, the FEAST eigenvalue algorithm [Polizzi, Phys. Rev. B 79, 115112 (2009)] is capable of solving the generalized eigenvalue problems representing traveling-wave problems—as exemplified by the complex band-structure problem—even though the matrices involved are complex, non-Hermitian, and singular, and hence outside the originally stated range of applicability of the algorithm. The obtained eigenvalues/eigenvectors, however, contain spurious solutions which must be detected and removed. The efficiency and parallel structure of the original algorithm are unaltered. The complex band structures of Si layers of varying thicknesses and InAs nanowires of varying radii are computed as test problems.
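The following sketch is not the FEAST algorithm itself (which uses contour integration), but it illustrates the structure of the underlying task: a complex, non-Hermitian generalized eigenvalue problem with a singular B, from which eigenpairs inside a target region are kept and spurious or poorly resolved pairs are screened out by a residual check. The matrices, region, and tolerance are all assumptions for demonstration.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 50
# Hypothetical complex, non-Hermitian pencil (A, B); B is made rank-deficient on purpose
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B[:, -1] = 0.0

w, V = eig(A, B)                         # generalized problem A v = w B v

center, radius = 0.0 + 0.0j, 2.0         # search region in the complex plane
keep = []
for i, lam in enumerate(w):
    if not np.isfinite(lam) or abs(lam - center) > radius:
        continue
    v = V[:, i]
    resid = np.linalg.norm(A @ v - lam * (B @ v)) / max(np.linalg.norm(v), 1e-30)
    if resid < 1e-8:                     # discard spurious / inaccurate pairs
        keep.append((lam, v))
print(f"{len(keep)} eigenpairs retained inside the search disc")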
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. The influence of individual sensor characteristics on the sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how the sensor-system performance can be understood in terms of contaminant transport processes and the model representation that is used to solve the inverse problem. The determination of release location and mass is shown to be related to and constrained by transport and mixing time scales. These time scales explain performance differences among different sensor networks. For example, the effect of longer sensor response times is comparably less for releases with longer mixing time scales. The third investigation explores how information fusion from heterogeneous sensors may improve the sensor-system performance and offset the need for more contaminant sensors. Physics- and algorithm-based frameworks are presented for selecting and fusing information from noncontaminant sensors. The frameworks are demonstrated with door-position sensors, which are found to be more useful in natural airflow conditions, but which cannot compensate for poor placement of contaminant sensors. The concepts and empirical findings have the potential to help in the design of sensor systems for more complex building systems. The research has broader relevance to additional environmental monitoring problems, fault detection and diagnostics, and system design.
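A minimal sketch of the Bayes Monte Carlo idea for this kind of inverse problem is given below. It uses a hypothetical three-zone, three-sensor transport surrogate and binary threshold sensors; none of the numbers or the surrogate model come from the dissertation.

import numpy as np
from scipy.special import ndtr

rng = np.random.default_rng(2)

# Hypothetical surrogate: concentration at sensor j per unit release mass in zone i
transport = np.array([[0.90, 0.20, 0.05],
                      [0.10, 0.80, 0.30],
                      [0.02, 0.30, 0.70]])
threshold, sigma = 0.5, 0.1                      # assumed trip level and sensor noise

# Simulated "true" event and the binary pattern the sensors report
true_zone, true_mass = 1, 1.2
readings = transport[true_zone] * true_mass + sigma * rng.standard_normal(3)
obs = (readings > threshold).astype(int)

# Bayes Monte Carlo: sample candidate (zone, mass) pairs and weight by likelihood
n_samples = 50_000
zones = rng.integers(0, 3, n_samples)
masses = rng.uniform(0.1, 3.0, n_samples)
levels = transport[zones] * masses[:, None]
p_trip = ndtr((levels - threshold) / sigma)      # P(sensor trips | predicted level)
likelihood = np.prod(np.where(obs == 1, p_trip, 1.0 - p_trip), axis=1)
weights = likelihood / likelihood.sum()

for z in range(3):
    print(f"P(zone {z} | obs) = {weights[zones == z].sum():.3f}")
print("posterior mean release mass:", np.sum(weights * masses))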
NASA Astrophysics Data System (ADS)
Garcia, Jose Luis
2000-10-01
In injection molding processes, computer aided engineering (CAE) allows processors to evaluate different process parameters in order to achieve complete filling of a cavity and, in some cases, it predicts shrinkage and warpage. However, because commercial computational packages are used to design complex geometries, detail in the thickness direction is limited. Approximations in the thickness direction lead to the solution of a 2½-D problem instead of a 3-D problem. These simplifications drastically reduce computational times and memory requirements. However, these approximations hinder the ability to predict thermal and/or mechanical degradation. The goal of this study was to determine the degree of degradation during PVC injection molding and to compare the results with a computational model. Instead of analyzing degradation in complex geometries, the computational analysis and injection molding trials were performed on typical sections found in complex geometries, such as flow in a tube, flow in a rectangular channel, and radial flow. This simplification reduces the flow problem to a 1-D problem and allows one to develop a computational model with a higher level of detail in the thickness direction, essential for the determination of degradation. Two different geometries were examined in this study: a spiral mold, in order to approximate the rectangular channel, and a center gated plate for the radial flow. Injection speed, melt temperature, and shot size were varied. Parts varying in degree of degradation, from no to severe degradation, were produced to determine possible transition points. Furthermore, two different PVC materials were used, low and high viscosity, M3800 and M4200, respectively (The Geon Company, Avon Lake, OH), to correlate the degree of degradation with the viscous heating observed during injection. It was found that a good agreement between experimental and computational results was obtained only if the reaction was assumed to be more thermally sensitive than found in literature. The results from this study show that, during injection, the activation energy for degradation was 65 kcal/mol, compared to 17--30 kcal/mol found in literature for quiescent systems.
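To see why the fitted activation energy matters, the short sketch below compares how strongly the Arrhenius degradation rate responds to a modest melt-temperature rise for the 65 kcal/mol value found in this study versus the 17-30 kcal/mol literature range. The temperatures are illustrative, not the study's processing conditions.

import numpy as np

R = 1.987e-3                       # kcal/(mol*K)
T_ref, T_hot = 473.0, 493.0        # nominal vs. viscous-heating melt temperature, K (assumed)

def rate_ratio(Ea, T1, T2):
    # Arrhenius ratio k(T2)/k(T1) for activation energy Ea in kcal/mol
    return np.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

for Ea in (17.0, 30.0, 65.0):
    print(f"Ea = {Ea:4.0f} kcal/mol -> degradation rate rises {rate_ratio(Ea, T_ref, T_hot):5.1f}x over a 20 K increase")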
Integrated research in natural resources: the key role of problem framing.
Roger N. Clark; George H. Stankey
2006-01-01
Integrated research is about achieving holistic understanding of complex biophysical and social issues and problems. It is driven by the need to improve understanding about such systems and to improve resource management by using the results of integrated research processes.Traditional research tends to fragment complex problems, focusing more on the pieces...
An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups
ERIC Educational Resources Information Center
Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi
2012-01-01
The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used to solve the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, as applied to energy complexes that include renewable energy installations (solar, wind), diesel generators and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was carried out. The results of optimization calculations for energy complexes operated autonomously and as part of a distribution grid are presented.
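As a minimal sketch of such a linear program, the example below dispatches a diesel generator against a demand profile partly covered by solar output; the hourly numbers, capacity, and fuel cost are invented for illustration and the formulation is far simpler than the paper's schemes (no storage or wind).

import numpy as np
from scipy.optimize import linprog

demand = np.array([40.0, 55.0, 70.0, 50.0])     # kW per hour (assumed)
solar  = np.array([ 0.0, 20.0, 35.0, 10.0])     # kW available from PV (assumed)
diesel_cap, fuel_cost = 80.0, 0.30              # kW, $/kWh (assumed)

c = np.full(len(demand), fuel_cost)             # minimize diesel fuel cost
# Demand balance: diesel_t + solar_t >= demand_t  ->  -diesel_t <= solar_t - demand_t
A_ub = -np.eye(len(demand))
b_ub = solar - demand
bounds = [(0.0, diesel_cap)] * len(demand)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("diesel schedule (kW):", res.x, " fuel cost ($):", res.fun)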
Kempermann, Gerd
2017-01-01
The Cynefin scheme is a concept of knowledge management, originally devised to support decision making in management, but more generally applicable to situations in which complexity challenges the quality of insight, prediction, and decision. Despite the fact that life itself, and especially the brain and its diseases, are complex to the extent that complexity could be considered their cardinal feature, complex problems in biomedicine are often treated as if they were actually not more than the complicated sum of solvable sub-problems. Because of the emergent properties of complex contexts this is not correct. With a set of clear criteria Cynefin helps to set apart complex problems from "simple/obvious," "complicated," "chaotic," and "disordered" contexts in order to avoid misinterpreting the relevant causality structures. The distinction comes with the insight into which specific kind of knowledge is possible in each of these categories and what the consequences are for resulting decisions and actions. From students' theses through the publication and grant-writing process to research politics, misinterpretation of complexity can have problematic or even dangerous consequences, especially in clinical contexts. Conceptualization of problems within a straightforward reference language like Cynefin improves clarity and stringency within projects and facilitates communication and decision-making about them.
Optimal damper placement research
NASA Astrophysics Data System (ADS)
Smirnov, Vladimir; Kuzhin, Bulat
2017-10-01
Nowadays, increased noise and vibration pollution on the territories of technoparks and research laboratories negatively influences the production of high-precision measuring instruments. The problem is also relevant for transport hubs, which experience the influence of machines, vehicles, trains and planes. Energy efficiency is one of the major concerns in modern road transport development. The problems of environmental pollution, lack of energy resources and energy efficiency require the research, production and implementation of energy-efficient materials that would form the foundation of an environmentally sustainable transport infrastructure. Improving the efficiency of energy use is a leading option to gain better energy security, improve industry profitability and competitiveness, and reduce the overall energy sector impacts on climate change. This paper has the following related goals: to study the impact of vibration on structures such as bus and train stations and terminals, which are most exposed to oscillation; to extend the service life of buildings by decreasing this negative influence; and to reduce expenses on maintenance and repair works. Seismic protection is also important today, when safety comes first; analysis of devastating earthquakes over the last few years confirms the reasonableness of applying such systems. The article is dedicated to studying the dependence between damper location and natural frequency. A concrete structure with a variable profile was simulated as the model for the analysis, and the Patran program complex was used to analyze it.
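The dependence between damper placement and modal properties can be illustrated with a small eigenvalue study of a lumped two-storey model, in the spirit of the paper's analysis but far simpler: the state-space eigenvalues give the damped frequencies and damping ratios for two alternative damper locations. All masses, stiffnesses, and damping coefficients below are assumed values.

import numpy as np

M = np.diag([2.0e4, 2.0e4])                       # kg (assumed)
k = 3.0e6                                         # N/m storey stiffness (assumed)
K = np.array([[2 * k, -k], [-k, k]])

def damped_modes(C):
    # Eigenvalues of the state-space matrix give damped frequencies and damping ratios
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]                   # one member of each conjugate pair
    freq = np.abs(lam) / (2 * np.pi)              # Hz
    zeta = -np.real(lam) / np.abs(lam)            # damping ratio
    return sorted(zip(freq, zeta))

c = 5.0e4                                         # N*s/m damper coefficient (assumed)
C_storey1 = np.array([[c, 0.0], [0.0, 0.0]])      # damper between ground and first floor
C_storey2 = np.array([[c, -c], [-c, c]])          # damper between first and second floor

for name, C in (("damper at storey 1", C_storey1), ("damper at storey 2", C_storey2)):
    print(name, [(round(f, 2), round(z, 3)) for f, z in damped_modes(C)])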
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
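The distinction between the two cost functions can be shown on a toy posterior over candidate strings: the 0-1 (MAP) decision and the minimum-expected-symbol-error decision need not coincide. The candidate strings and probabilities below are invented, and Hamming distance over equal-length strings stands in for the integer-valued symbol cost.

import numpy as np

candidates = ["cat", "car", "far"]
posterior  = np.array([0.4, 0.3, 0.3])            # assumed posterior probabilities

def symbol_cost(a, b):
    # Integer-valued metric cost: number of differing symbols (Hamming distance)
    return sum(x != y for x, y in zip(a, b))

# 0-1 cost: pick the MAP string (minimizes string error)
map_decision = candidates[int(np.argmax(posterior))]

# Symbol cost: pick the string minimizing the expected number of symbol errors
expected_cost = [sum(p * symbol_cost(c, h) for p, h in zip(posterior, candidates))
                 for c in candidates]
symbol_decision = candidates[int(np.argmin(expected_cost))]

print("MAP (0-1 cost) decision:", map_decision)            # "cat"
print("expected symbol costs:", dict(zip(candidates, np.round(expected_cost, 2))))
print("symbol-cost decision:", symbol_decision)             # "car" differs from the MAP choice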
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Peng, Chia-Yen; Wang, Dongdong; Chen, Jiun-Shyan
2012-01-01
A wheel experiencing sinkage and slippage events poses a high risk to rover missions as evidenced by recent mobility challenges on the Mars Exploration Rover (MER) project. Because several factors contribute to wheel sinkage and slippage conditions such as soil composition, large deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc., there are significant benefits to modeling these events to a sufficient degree of complexity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study demonstrates some of the large deformation modeling capability of meshfree methods and the realistic solutions obtained by accounting for the soil material properties. A benchmark wheel-soil interaction problem is developed and analyzed using a specific class of meshfree methods called Reproducing Kernel Particle Method (RKPM). The benchmark problem is also analyzed using a commercially available finite element approach with Lagrangian meshing for comparison. RKPM results are comparable to classical pressure-sinkage terramechanics relationships proposed by Bekker-Wong. Pending experimental calibration by future work, the meshfree modeling technique will be a viable simulation tool for trade studies assisting rover wheel design.
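For reference, the Bekker pressure-sinkage relation that the RKPM results are compared against has the simple closed form sketched below; the soil coefficients are textbook-style dry-sand values used only for illustration, not the study's calibrated parameters.

import numpy as np

def bekker_pressure(z, b, k_c, k_phi, n):
    # Bekker pressure-sinkage relation: p = (k_c / b + k_phi) * z**n
    return (k_c / b + k_phi) * z ** n

b = 0.15                                   # wheel contact width, m (assumed)
k_c, k_phi, n = 0.99e3, 1528.43e3, 1.1     # illustrative soil coefficients (SI units)

for zi in np.linspace(0.0, 0.05, 6):       # sinkage from 0 to 5 cm
    print(f"z = {zi * 100:4.1f} cm -> p = {bekker_pressure(zi, b, k_c, k_phi, n) / 1e3:7.2f} kPa")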
Neural-network-based state of health diagnostics for an automated radioxenon sampler/analyzer
NASA Astrophysics Data System (ADS)
Keller, Paul E.; Kangas, Lars J.; Hayes, James C.; Schrom, Brian T.; Suarez, Reynold; Hubbard, Charles W.; Heimbigner, Tom R.; McIntyre, Justin I.
2009-05-01
Artificial neural networks (ANNs) are used to determine the state-of-health (SOH) of the Automated Radioxenon Analyzer/Sampler (ARSA). ARSA is a gas collection and analysis system used for non-proliferation monitoring in detecting radioxenon released during nuclear tests. SOH diagnostics are important for automated, unmanned sensing systems so that remote detection and identification of problems can be made without onsite staff. Both recurrent and feed-forward ANNs are presented. The recurrent ANN is trained to predict sensor values based on current valve states, which control air flow, so that with only valve states the normal SOH sensor values can be predicted. Deviation between modeled value and actual is an indication of a potential problem. The feed-forward ANN acts as a nonlinear version of principal components analysis (PCA) and is trained to replicate the normal SOH sensor values. Because of ARSA's complexity, this nonlinear PCA is better able to capture the relationships among the sensors than standard linear PCA and is applicable to both sensor validation and recognizing off-normal operating conditions. Both models provide valuable information to detect impending malfunctions before they occur to avoid unscheduled shutdown. Finally, the ability of ANN methods to predict the system state is presented.
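A minimal sketch of the feed-forward, nonlinear-PCA-style idea follows: a small network is trained to reproduce "normal" sensor vectors through a narrow hidden layer, and the reconstruction error serves as a state-of-health score. The simulated sensor data and network settings are assumptions; this is not the ARSA model.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Simulated "normal" sensor vectors with correlated channels
n_train, n_sensors = 2000, 8
latent = rng.standard_normal((n_train, 2))
mixing = rng.standard_normal((2, n_sensors))
X_train = latent @ mixing + 0.05 * rng.standard_normal((n_train, n_sensors))

# Feed-forward net trained to reproduce its input through a narrow hidden layer
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh", max_iter=3000, random_state=0)
ae.fit(X_train, X_train)

def soh_score(x):
    # Reconstruction error; large values flag off-normal operating conditions
    return float(np.linalg.norm(ae.predict(x.reshape(1, -1)) - x))

normal = latent[0] @ mixing
faulty = normal.copy()
faulty[3] += 5.0                            # one sensor drifting far from its normal value
print("normal score:", round(soh_score(normal), 3), " faulty score:", round(soh_score(faulty), 3))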
Zhu, Zaifang; Chen, Huang; Ren, Jiangtao; Lu, Juan J; Gu, Congying; Lynch, Kyle B; Wu, Si; Wang, Zhe; Cao, Chengxi; Liu, Shaorong
2018-03-01
We develop a new two-dimensional (2D) high performance liquid chromatography (HPLC) approach for intact protein analysis. Development of 2D HPLC has a bottleneck problem - limited second-dimension (second-D) separation speed. We solve this problem by incorporating multiple second-D columns so that several second-D separations proceed in parallel. To demonstrate the feasibility of using this approach for comprehensive protein analysis, we select ion-exchange chromatography as the first dimension and reverse-phase chromatography as the second-D. We incorporate three second-D columns in an innovative way so that three reverse-phase separations can be performed simultaneously. We test this system for separating both standard proteins and E. coli lysates and achieve baseline resolutions for eleven standard proteins and obtain more than 500 peaks for E. coli lysates. This is an indication that the sample complexities are greatly reduced. We see fewer than 10 bands when each fraction of the second-D effluent is analyzed by sodium dodecyl sulfate - polyacrylamide gel electrophoresis (SDS-PAGE), compared to hundreds of SDS-PAGE bands when the original sample is analyzed. This approach could potentially be an excellent and general tool for protein analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to give the student practice in nearly all the methods presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
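The flavor of nonlinear regression applied to a groundwater flow model can be shown with a standard example that is not one of the report's programs: fitting transmissivity and storativity of the Theis drawdown solution to synthetic pumping-test data. The pumping rate, radius, and "observed" data below are all assumed.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q, r = 0.01, 50.0                          # pumping rate m^3/s and radius m (assumed)

def theis_drawdown(t, T, S):
    # Theis solution: s = Q/(4*pi*T) * W(u), with u = r^2 S / (4 T t) and W the well function
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

rng = np.random.default_rng(4)
t_obs = np.logspace(2, 5, 25)              # observation times, s
true_T, true_S = 5.0e-3, 2.0e-4
s_obs = theis_drawdown(t_obs, true_T, true_S) + 0.002 * rng.standard_normal(t_obs.size)

# Nonlinear least-squares regression for the flow-model parameters
(T_fit, S_fit), cov = curve_fit(theis_drawdown, t_obs, s_obs, p0=(1.0e-3, 1.0e-4))
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}, 1-sigma = {np.sqrt(np.diag(cov))}")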
Multivariate analysis in thoracic research.
Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego
2015-03-01
Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since the best way to represent knowledge of reality is through modeling, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze the different variables measured for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each one should be employed according to the type of variables to analyze: dependent, interdependence and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that we can use.
Hoffmann, Michael; Borenstein, Jason
2014-03-01
As a committee of the National Academy of Engineering recognized, ethics education should foster the ability of students to analyze complex decision situations and ill-structured problems. Building on the NAE's insights, we report about an innovative teaching approach that has two main features: first, it places the emphasis on deliberation and on self-directed, problem-based learning in small groups of students; and second, it focuses on understanding ill-structured problems. The first innovation is motivated by an abundance of scholarly research that supports the value of deliberative learning practices. The second results from a critique of the traditional case-study approach in engineering ethics. A key problem with standard cases is that they are usually described in such a fashion that renders the ethical problem as being too obvious and simplistic. The practitioner, by contrast, may face problems that are ill-structured. In the collaborative learning environment described here, groups of students use interactive and web-based argument visualization software called "AGORA-net: Participate - Deliberate!". The function of the software is to structure communication and problem solving in small groups. Students are confronted with the task of identifying possible stakeholder positions and reconstructing their legitimacy by constructing justifications for these positions in the form of graphically represented argument maps. The argument maps are then presented in class so that these stakeholder positions and their respective justifications become visible and can be brought into a reasoned dialogue. Argument mapping provides an opportunity for students to collaborate in teams and to develop critical thinking and argumentation skills.
Maron, Bradley A; Leopold, Jane A
2016-09-30
Reductionist theory proposes that analyzing complex systems according to their most fundamental components is required for problem resolution, and has served as the cornerstone of scientific methodology for more than four centuries. However, technological gains in the current scientific era now allow for the generation of large datasets that profile the proteomic, genomic, and metabolomic signatures of biological systems across a range of conditions. The accessibility of data on such a vast scale has, in turn, highlighted the limitations of reductionism, which is not conducive to analyses that consider multiple and contemporaneous interactions between intermediates within a pathway or across constructs. Systems biology has emerged as an alternative approach to analyze complex biological systems. This methodology is based on the generation of scale-free networks and, thus, provides a quantitative assessment of relationships between multiple intermediates, such as protein-protein interactions, within and between pathways of interest. In this way, systems biology is well positioned to identify novel targets implicated in the pathogenesis or treatment of diseases. In this review, the historical root and fundamental basis of systems biology, as well as the potential applications of this methodology are discussed with particular emphasis on integration of these concepts to further understanding of cardiovascular disorders such as coronary artery disease and pulmonary hypertension.
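As a small illustration of the network view described here, the sketch below builds a scale-free graph as a stand-in for a protein-protein interaction network and ranks hub and bridging nodes, the kind of intermediates a systems-biology analysis would flag as candidate targets. The graph is synthetic; no biological data are used.

import networkx as nx

# Scale-free (Barabasi-Albert) graph as a stand-in for a protein interaction network
G = nx.barabasi_albert_graph(n=500, m=2, seed=0)

# Hub nodes: highest degree centrality
dc = nx.degree_centrality(G)
hubs = sorted(dc, key=dc.get, reverse=True)[:5]
print("top hub nodes:", hubs, "degrees:", [G.degree(h) for h in hubs])

# Bridging nodes: highest betweenness, i.e. intermediates linking otherwise separate pathways
bc = nx.betweenness_centrality(G)
bridges = sorted(bc, key=bc.get, reverse=True)[:5]
print("top bridging nodes:", bridges)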
A design of spectrophotometric microfluidic chip sensor for analyzing silicate in seawater
NASA Astrophysics Data System (ADS)
Cao, X.; Zhang, S. W.; Chu, D. Z.; Wu, N.; Ma, H. K.; Liu, Y.
2017-08-01
High quality and continuous in situ silicate data are required to investigate the mechanism of the biogeochemical cycles and the formation of red tides. There is an urgently growing need for autonomous in situ silicate instruments that perform determinations on various platforms. However, due to high reagent and power consumption, as well as high system complexity leading to low reliability and robustness, the performance of commercially available silicate sensors is not satisfactory. To address these problems, we present a new generation of microfluidic continuous flow analysis silicate sensor with sufficient analytical performance and robustness for in situ determination of soluble silicate in seawater. The reaction mechanism of this sensor is based on the reaction of silicate with ammonium molybdate to form a yellow silicomolybdate complex and its further reduction to silicomolybdenum blue by ascorbic acid. The minimum limit of detection was 45.1 nmol L-1, and the linear determination range of the sensor is 0-400 μmol L-1. The recovery rate for actual water samples is between 98.1% and 104.0%, and the analyzing cycle of the sensor is about 5 minutes. This sensor has the advantages of high accuracy, high integration, low water consumption, and strong anti-interference ability. It has been successfully applied to measuring silicate in seawater in Jiaozhou Bay.
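The quantitative step behind such a spectrophotometric method is a linear calibration and a detection-limit estimate; a minimal sketch is given below with invented calibration and blank data (the reported 45.1 nmol L-1 limit of detection is not reproduced by these illustrative numbers).

import numpy as np

# Hypothetical calibration data for a molybdenum-blue silicate method
conc = np.array([0.0, 5.0, 10.0, 50.0, 100.0, 200.0, 400.0])        # umol/L standards
absorbance = np.array([0.002, 0.006, 0.010, 0.042, 0.083, 0.165, 0.331])

# Linear (Beer-Lambert) calibration by least squares
slope, intercept = np.polyfit(conc, absorbance, 1)

# Limit of detection estimated as 3 * sigma_blank / slope
blank_replicates = np.array([0.0021, 0.0018, 0.0023, 0.0019, 0.0022])
lod = 3.0 * blank_replicates.std(ddof=1) / slope

sample_abs = 0.057
print(f"slope = {slope:.5f} AU per umol/L, LOD ~ {lod:.3f} umol/L")
print(f"sample concentration ~ {(sample_abs - intercept) / slope:.1f} umol/L")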