The Bright Side of Being Blue: Depression as an Adaptation for Analyzing Complex Problems
ERIC Educational Resources Information Center
Andrews, Paul W.; Thomson, J. Anderson, Jr.
2009-01-01
Depression is the primary emotional condition for which help is sought. Depressed people often report persistent rumination, which involves analysis, and complex social problems in their lives. Analysis is often a useful approach for solving complex problems, but it requires slow, sustained processing, so disruption would interfere with problem…
The bright side of being blue: Depression as an adaptation for analyzing complex problems
Andrews, Paul W.; Thomson, J. Anderson
2009-01-01
Depression ranks as the primary emotional problem for which help is sought. Depressed people often have severe, complex problems, and rumination is a common feature. Depressed people often believe that their ruminations give them insight into their problems, but clinicians often view depressive rumination as pathological because it is difficult to disrupt and interferes with the ability to concentrate on other things. Abundant evidence indicates that depressive rumination involves the analysis of episode-related problems. Because analysis is time consuming and requires sustained processing, disruption would interfere with problem-solving. The analytical rumination (AR) hypothesis proposes that depression is an adaptation that evolved as a response to complex problems and whose function is to minimize disruption of rumination and sustain analysis of complex problems. It accomplishes this by giving episode-related problems priority access to limited processing resources, by reducing the desire to engage in distracting activities (anhedonia), and by producing psychomotor changes that reduce exposure to distracting stimuli. Because processing resources are limited, the inability to concentrate on other things is a tradeoff that must be made to sustain analysis of the triggering problem. The AR hypothesis is supported by evidence from many levels, including genes, neurotransmitters and their receptors, neurophysiology, neuroanatomy, neuroenergetics, pharmacology, cognition and behavior, and the efficacy of treatments. In addition, we address and provide explanations for puzzling findings in the cognitive and behavioral genetics literatures on depression. In the process, we challenge the belief that serotonin transmission is low in depression. Finally, we discuss implications of the hypothesis for understanding and treating depression. PMID:19618990
Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin
2009-10-01
In the past decades, many theoretical results on the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general, easy-to-apply approach designed specifically for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n^2/N) and O(n ln ln n + n ln n/N) generations, respectively, to find the global optimum. Beyond these new results, our approach can also be applied directly to obtain results for some population-based EAs on other unimodal problems. Moreover, we discuss when the general approach provides tight bounds on the mean first hitting times and when it should be combined with problem-specific knowledge to obtain tight bounds. This is the first theoretical discussion of a general approach for analyzing population-based EAs on unimodal problems.
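The flavor of the (N + N) setting described above can be illustrated with a toy experiment. The sketch below is a minimal population-based EA with bitwise mutation and truncation selection on OneMax; all parameter values (n = 30, N = 10, mutation rate 1/n, the seed) are chosen purely for illustration and are not taken from the paper.

```python
import random

def onemax(bits):
    """Fitness = number of ones; the optimum is the all-ones string."""
    return sum(bits)

def n_plus_n_ea(n, N, max_gens=100000, seed=1):
    """Toy (N + N) EA: each parent produces one offspring by bitwise
    mutation (rate 1/n); truncation selection keeps the N best of the
    2N individuals. Returns the first generation at which the optimum
    appears in the population, or None if max_gens is exhausted."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(N)]
    for gen in range(max_gens):
        if any(onemax(ind) == n for ind in pop):
            return gen
        offspring = [[1 - b if rng.random() < 1.0 / n else b for b in ind]
                     for ind in pop]
        # Truncation selection: keep the N best of parents + offspring.
        pop = sorted(pop + offspring, key=onemax, reverse=True)[:N]
    return None

gen = n_plus_n_ea(n=30, N=10)
```

Averaging `gen` over many seeds would give an empirical estimate of the mean first hitting time that the paper bounds analytically.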
Analyzing Complex Survey Data.
ERIC Educational Resources Information Center
Rodgers-Farmer, Antoinette Y.; Davis, Diane
2001-01-01
Uses data from the 1994 AIDS Knowledge and Attitudes Supplement to the National Health Interview Survey (NHIS) to illustrate that biased point estimates, inappropriate standard errors, and misleading tests of significance can result from using traditional software packages, such as SPSS or SAS, for complex survey analysis. (BF)
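The bias the abstract warns about can be seen in a tiny hand-made example: treating an unequal-probability sample as if it were simple random sampling shifts the point estimate. The data and weights below are invented for illustration and have nothing to do with the NHIS.

```python
# Toy oversampled design: stratum A respondents carry small expansion
# weights, stratum B respondents large ones. (y, weight) pairs are invented.
data = [(1, 50), (1, 50), (0, 50), (0, 200), (0, 200), (1, 200)]

# Treating the sample as simple random sampling (what a default SPSS/SAS
# analysis effectively does) versus using the design weights:
naive_mean = sum(y for y, _ in data) / len(data)
weighted_mean = (sum(y * w for y, w in data)
                 / sum(w for _, w in data))
```

Here the naive estimate is 0.5 while the design-weighted estimate is 0.4; standard errors diverge in the same way, which is why design-aware survey procedures are needed.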
Implementation of Complexity Analyzing Based on Additional Effect
NASA Astrophysics Data System (ADS)
Zhang, Peng; Li, Na; Liang, Yanhong; Liu, Fang
According to Complexity Theory, complexity arises in a system when a functional requirement is not satisfied. Several studies have examined Complexity Theory in the framework of Axiomatic Design; however, they focus on reducing complexity, and none offers a method for analyzing the complexity present in a system. This paper therefore puts forth a method of analyzing complexity intended to fill that gap. To ground the discussion, two concepts are introduced: the ideal effect and the additional effect. The method of analyzing complexity based on additional effect combines Complexity Theory with the Theory of Inventive Problem Solving (TRIZ) and helps designers analyze complexity by means of additional effects. A case study shows the application of the process.
Analyzing Static Loading of Complex Structures
NASA Technical Reports Server (NTRS)
Gallear, D. C.
1986-01-01
Critical loading conditions determined from analysis of each structural element. Automated Thrust Structures Loads and Stresses (ATLAS) system is series of programs developed to analyze elements of complex structure under static-loading conditions. ATLAS calculates internal loads, beam-bending loads, column- and web-buckling loads, beam and panel stresses, and beam-corner stresses. Programs written in FORTRAN IV and Assembler for batch execution.
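The beam-stress portion of such an analysis reduces to standard formulas. As a hedged sketch (not ATLAS code, which is FORTRAN IV), the flexure formula for a hypothetical rectangular section looks like:

```python
def rect_inertia(b, h):
    """Second moment of area I = b*h^3/12 for a b x h rectangular section."""
    return b * h ** 3 / 12.0

def bending_stress(moment, c, inertia):
    """Flexure formula: maximum bending stress sigma = M*c/I."""
    return moment * c / inertia

# Hypothetical member: 0.05 m x 0.10 m section under a 1 kN*m moment,
# extreme fiber at c = h/2 (all SI units).
I = rect_inertia(0.05, 0.10)
sigma = bending_stress(1000.0, 0.05, I)
```

This gives a peak stress of 12 MPa; a system like ATLAS repeats such element-level calculations across every member and load case to find the critical loading conditions.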
On a Procedure for Analyzing Certain Problems of Diffusion Theory.
PARTIAL DIFFERENTIAL EQUATIONS, DIFFUSION, BOUNDARY VALUE PROBLEMS, INTEGRAL TRANSFORMS, COMPLEX VARIABLES, CONDUCTION (HEAT TRANSFER), ELECTRICAL CONDUCTIVITY, FLUID FLOW, BESSEL FUNCTIONS
Analyzing, solving offshore seawater injection problems
Al-Rubale, J.S.; Muhsin, A.A.; Shaker, H.A.; Washash, I.
1988-01-01
Changes in seawater treatment, necessary cleaning of injection lines, and modified well completion practices have reduced injection well plugging on pressure maintenance projects operated by Abu Dhabi Marine Operating Co. (Adma-Opco) in the Zakum and Umm Shaif fields, offshore Abu Dhabi, in the Arabian Gulf. Plugging was caused primarily by iron sulfide and corrosion products that were displaced downhole after being formed in the water distribution system. These materials, in turn, resulted from O2 inadvertently entering the injection system, where it combined with corrosive H2S generated by sulfate-reducing bacteria. The problem was further compounded by debris peeling from the interior of well tubulars, the high solids content of the brine used to complete injectors, and slime formation in injection pipelines. Acidizing wells proved a quick method for partially restoring injectivity, but a continuing concerted effort is being made to achieve more permanent results by eliminating the O2 and H2S, which are at the root of the difficulty.
Analyzing and Detecting Problems in Systems of Systems
NASA Technical Reports Server (NTRS)
Lindvall, Mikael; Ackermann, Christopher; Stratton, William C.; Sibol, Deane E.; Godfrey, Sally
2008-01-01
Many software systems are evolving into complex systems of systems (SoS) for which inter-system communication is mission-critical. Evidence indicates that transmission failures and performance issues are not uncommon. In a NASA-supported Software Assurance Research Program (SARP) project, we are researching a new approach to such problems. In this paper, we present an approach for analyzing inter-system communications with the goal of uncovering both transmission errors and performance problems. Our approach consists of a visualization component and an evaluation component. While the visualization of the observed communication aims to facilitate understanding, the evaluation component automatically checks the conformance of an observed (actual) communication to a desired (planned) one. Both the actual and the planned communications are represented as sequence diagrams, and the evaluation algorithm checks the conformance of the actual diagram to the planned one. We have applied our approach to the communication of aerospace systems and succeeded in detecting and resolving even subtle and long-standing transmission problems.
Software Analyzes Complex Systems in Real Time
NASA Technical Reports Server (NTRS)
2008-01-01
Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts in a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real-time and limited-hardware environments, and to be usable by non-expert-system applications written in conventional programming languages. The technology is used in several NASA applications, including the Mars Exploration Rovers, and was used in the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Voyager encounter with Neptune. It is also finding applications outside of the Space Agency.
Analyzing Complex Reaction Mechanisms Using Path Sampling.
van Erp, Titus S; Moqadam, Mahmoud; Riccardi, Enrico; Lervik, Anders
2016-11-08
We introduce an approach to analyze collective variables (CVs) regarding their predictive power for a reaction. The method is based on already available path sampling data produced by, for instance, transition interface sampling or forward flux sampling, which are path sampling methods used for efficient computation of reaction rates. By a search in CV space, a measure of predictiveness can be optimized and, in addition, the number of CVs can be reduced using projection operations which keep this measure invariant. The approach allows testing hypotheses on the reaction mechanism but could, in principle, also be used to construct the phase-space committor surfaces without the need of additional trajectory sampling. The procedure is illustrated for a one-dimensional double-well potential, a theoretical model for an ion-transfer reaction in which the solvent structure can lower the barrier, and an ab initio molecular dynamics study of water auto-ionization. The analysis technique enhances the quantitative interpretation of path sampling data which can provide clues on how chemical reactions can be steered in desired directions.
Quantum Computing: Solving Complex Problems
DiVincenzo, David [IBM Watson Research Center
2016-07-12
One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.
Analyzing the Origins of Childhood Externalizing Behavioral Problems
ERIC Educational Resources Information Center
Barnes, J. C.; Boutwell, Brian B.; Beaver, Kevin M.; Gibson, Chris L.
2013-01-01
Drawing on a sample of twin children from the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B; Snow et al., 2009), the current study analyzed 2 of the most prominent predictors of externalizing behavioral problems (EBP) in children: (a) parental use of spankings and (b) childhood self-regulation. A variety of statistical techniques were…
Quantifying and analyzing the network basis of genetic complexity.
Thompson, Ethan G; Galitski, Timothy
2012-01-01
Genotype-to-phenotype maps exhibit complexity. This genetic complexity is mentioned frequently in the literature, but a consistent and quantitative definition is lacking. Here, we derive such a definition and investigate its consequences for model genetic systems. The definition equates genetic complexity with a surplus of genotypic diversity over phenotypic diversity. Applying this definition to ensembles of Boolean network models, we found that the in-degree distribution and the number of periodic attractors produced determine the relative complexity of different topology classes. We found evidence that networks that are difficult to control, or that exhibit a hierarchical structure, are genetically complex. We analyzed the complexity of the cell cycle network of Saccharomyces cerevisiae and pinpointed genes and interactions that are most important for its high genetic complexity. The rigorous definition of genetic complexity is a tool for unraveling the structure and properties of genotype-to-phenotype maps by enabling the quantitative comparison of the relative complexities of different genetic systems. The definition also allows the identification of specific network elements and subnetworks that have the greatest effects on genetic complexity. Moreover, it suggests ways to engineer biological systems with desired genetic properties.
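The central quantity, a surplus of genotypic over phenotypic diversity, can be demonstrated on a toy Boolean network. The three-gene cascade and the count-based diversity measure below are illustrative assumptions, not the authors' model or their exact definition.

```python
from itertools import product
from math import log2

def phenotype(g):
    """Fixed point reached by a toy 3-gene cascade (a -> b -> c) under
    knockout pattern g: gene i is deleted when g[i] == 0. Synchronous
    updates settle within three steps for this cascade."""
    a, b, c = 1, 1, 1
    for _ in range(5):
        a, b, c = g[0], g[1] * a, g[2] * (a * b)
    return (a, b, c)

# Genotypes: all 2^3 knockout patterns; phenotypes: distinct fixed points.
genotypes = list(product((0, 1), repeat=3))
phenotypes = {phenotype(g) for g in genotypes}

# Genetic complexity sketched as a surplus of genotypic over phenotypic
# diversity, measured here simply in bits of distinct types.
complexity = log2(len(genotypes)) - log2(len(phenotypes))
```

Eight knockout genotypes collapse onto four fixed-point phenotypes, giving a one-bit surplus; the paper's definition applies an analogous comparison to far richer network ensembles.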
Analyzing Problem's Difficulty Based on Neural Networks and Knowledge Map
ERIC Educational Resources Information Center
Kuo, Rita; Lien, Wei-Peng; Chang, Maiga; Heh, Jia-Sheng
2004-01-01
This paper proposes a methodology for calculating both the difficulty of basic problems and the difficulty of solving a problem. The method for calculating a problem's difficulty follows the process of constructing the problem, including Concept Selection, Unknown Designation, and Proposition Construction. Some necessary measures observed…
Increasing process understanding by analyzing complex interactions in experimental data.
Naelapää, Kaisa; Allesø, Morten; Kristensen, Henning G; Bro, Rasmus; Rantanen, Jukka; Bertelsen, Poul
2009-05-01
There is a recognized need for new approaches to understanding unit operations of pharmaceutical relevance. A method for analyzing complex interactions in experimental data is introduced. Higher-order interactions do exist between process parameters, which complicates the interpretation of experimental results. In this study, experiments based on a mixed factorial design of a coating process were performed. Drug release was analyzed by traditional analysis of variance (ANOVA) and generalized multiplicative ANOVA (GEMANOVA). GEMANOVA modeling is introduced in this study as a new tool for increased understanding of a coating process. It was possible to model the response, that is, the amount of drug released, using both techniques. However, the ANOVA model was difficult to interpret because several interactions between process parameters existed. In contrast to ANOVA, GEMANOVA is especially suited to modeling complex interactions and producing easily understandable models of them. GEMANOVA modeling allowed a simple visualization of the entire experimental space. Furthermore, information was obtained on how relative changes in the settings of process parameters influence film quality and thereby drug release.
Analyzing patterns in experts' approaches to solving experimental problems
NASA Astrophysics Data System (ADS)
Čančula, Maja Poklinek; Planinšič, Gorazd; Etkina, Eugenia
2015-04-01
We report detailed observations of three pairs of expert scientists and a pair of advanced undergraduate students solving an experimental optics problem. Using a new method ("transition graphs") of visualizing sequences of logical steps, we were able to compare the groups and identify patterns that could not be found using previously existing methods. While the problem solving of undergraduates significantly differed from that of experts at the beginning of the process, it gradually became more similar to the expert problem solving. We mapped problem solving steps and their sequence to the elements of an approach to teaching and learning physics called Investigative Science Learning Environment (ISLE), and we speculate that the ISLE educational framework closely represents the actual work of physicists.
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.
Wang, Di; Kleinberg, Robert D
2009-11-28
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.
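For intuition about what the C(k) bounds are bounding, a small QUBO instance can simply be minimized by brute force, which is feasible only for tiny n since the problem is NP-complete in general. The coefficients below are arbitrary illustrative values; the flow-based computations of C(2) and C(3) themselves are beyond a short sketch.

```python
from itertools import product

# Hypothetical QUBO instance: minimize
#   f(x) = sum_i c_i * x_i + sum_{i<j} q_ij * x_i * x_j  over x in {0, 1}^3.
linear = {0: 2, 1: -3, 2: 1}
quad = {(0, 1): -2, (1, 2): 4, (0, 2): -1}

def qubo_value(x):
    return (sum(c * x[i] for i, c in linear.items())
            + sum(q * x[i] * x[j] for (i, j), q in quad.items()))

# Exhaustive minimization over all 2^n assignments.
best = min(product((0, 1), repeat=3), key=qubo_value)
minimum = qubo_value(best)
```

Any valid lower bound such as C(2) or C(3) must sit at or below `minimum`; the point of the flow constructions in the paper is to compute such bounds in polynomial time, with persistency information as a by-product.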
Complex Problem Solving in a Workplace Setting.
ERIC Educational Resources Information Center
Middleton, Howard
2002-01-01
Studied complex problem solving in the hospitality industry through interviews with six office staff members and managers. Findings show it is possible to construct a taxonomy of problem types and that the most common approach can be termed "trial and error." (SLD)
The Process of Solving Complex Problems
ERIC Educational Resources Information Center
Fischer, Andreas; Greiff, Samuel; Funke, Joachim
2012-01-01
This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application…
Special Education Provision in Nigeria: Analyzing Contexts, Problems, and Prospects
ERIC Educational Resources Information Center
Obiakor, Festus E.; Offor, MaxMary Tabugbo
2011-01-01
Nigeria has made some efforts to educate all of its citizenry, including those with disabilities. And, it has struggled to make sure that programs are available to those who need them. However, its traditional, sociocultural, and educational problems have prevented some programmatic consistency and progress. As a result, the special education…
Program for Analyzing Flows in a Complex Network
NASA Technical Reports Server (NTRS)
Majumdar, Alok Kumar
2006-01-01
Generalized Fluid System Simulation Program (GFSSP) version 4 is a general-purpose computer program for analyzing steady-state and transient flows in a complex fluid network. The program is capable of modeling compressibility, fluid transients (e.g., water hammers), phase changes, mixtures of chemical species, and such externally applied body forces as gravitational and centrifugal ones. A graphical user interface enables the user to interactively develop a simulation of a fluid network consisting of nodes and branches. The user can also run the simulation and view the results in the interface. The system of equations for conservation of mass, energy, chemical species, and momentum is solved numerically by a combination of the Newton-Raphson and successive-substitution methods.
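The Newton-Raphson step that GFSSP applies to its conservation equations can be illustrated on the smallest possible network: one internal node between a supply and an outlet, with a square-root branch-flow law. The pressures, conductances, and the flow model itself are hypothetical illustration choices, not GFSSP's actual equations.

```python
from math import sqrt

# Hypothetical two-branch network: supply (200 kPa) -> node -> outlet (100 kPa),
# with branch flow Q = k * sqrt(dP). k1 and k2 are made-up conductances.
P_in, P_out = 200.0, 100.0
k1, k2 = 2.0, 1.0

def residual(p):
    """Mass conservation at the internal node: inflow minus outflow."""
    return k1 * sqrt(P_in - p) - k2 * sqrt(p - P_out)

def dresidual(p):
    """Derivative of the residual with respect to the nodal pressure."""
    return -k1 / (2.0 * sqrt(P_in - p)) - k2 / (2.0 * sqrt(p - P_out))

p = 150.0  # initial guess strictly between the boundary pressures
for _ in range(20):  # Newton-Raphson iteration on the nodal pressure
    p -= residual(p) / dresidual(p)
# Analytically, k1^2 * (P_in - p) = k2^2 * (p - P_out) gives p = 180 kPa.
```

GFSSP solves the same kind of nonlinear balance, but for coupled mass, energy, species, and momentum equations over many nodes, combining Newton-Raphson with successive substitution.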
Molecular Analyzer for Complex Refractory Organic-Rich Surfaces (MACROS)
NASA Technical Reports Server (NTRS)
Getty, Stephanie A.; Cook, Jamie E.; Balvin, Manuel; Brinckerhoff, William B.; Li, Xiang; Grubisic, Andrej; Cornish, Timothy; Ferrance, Jerome; Southard, Adrian
2017-01-01
The Molecular Analyzer for Complex Refractory Organic-rich Surfaces, MACROS, is a novel instrument package being developed at NASA Goddard Space Flight Center. MACROS enables the in situ characterization of a sample's composition by coupling two powerful techniques into one compact instrument package: (1) laser desorption/ionization time-of-flight mass spectrometry (LDMS) for broad detection of inorganic mineral composition and non-volatile organics, and (2) liquid-phase extraction methods to gently isolate the soluble organic and inorganic fraction of a planetary powder for enrichment and detailed analysis by liquid chromatographic separation coupled to LDMS. The LDMS is capable of positive and negative ion detection, precision mass selection, and fragment analysis. Two modes are included for LDMS: single-laser LDMS as the broad survey mode and two-step laser mass spectrometry (L2MS). The liquid-phase extraction will be done in a newly designed extraction module (EM) prototype, providing selectivity in the analysis of a complex sample. For sample collection, a diamond drill front end will be used to collect rock/icy powder. With all these components and capabilities together, MACROS offers a versatile analytical instrument for a mission targeting an icy moon, carbonaceous asteroid, or comet, to fully characterize the surface composition and advance our understanding of the chemical inventory present on that body.
Complex Problem Solving--More than Reasoning?
ERIC Educational Resources Information Center
Wustenberg, Sascha; Greiff, Samuel; Funke, Joachim
2012-01-01
This study investigates the internal structure and construct validity of Complex Problem Solving (CPS), which is measured by a "Multiple-Item-Approach." It is tested whether (a) three facets of CPS--"rule identification" (adequateness of strategies), "rule knowledge" (generated knowledge) and "rule application"…
Refined scale-dependent permutation entropy to analyze systems complexity
NASA Astrophysics Data System (ADS)
Wu, Shuen-De; Wu, Chiu-Wen; Humeau-Heurtier, Anne
2016-05-01
Multiscale entropy (MSE) has become a prevailing method to quantify the complexity of systems. Unfortunately, MSE has a temporal complexity of O(N^2), which is unrealistic for long time series. Moreover, MSE relies on the sample entropy computation, which is length-dependent and leads to large variance and possibly undefined entropy values for short time series. Here, we introduce a new multiscale complexity measure, the refined scale-dependent permutation entropy (RSDPE). Through the processing of different kinds of synthetic data and real signals, we show that RSDPE behaves much like MSE. Furthermore, RSDPE has a temporal complexity of O(N). Finally, RSDPE has the advantage of being much less length-dependent than MSE. From all this, we conclude that RSDPE outperforms MSE in terms of computational cost and computational accuracy.
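The ordinal-pattern core that RSDPE builds on, Bandt-Pompe permutation entropy, is compact enough to sketch; the refinement and scale-dependence of RSDPE itself are not reproduced here.

```python
import random
from math import factorial, log

def permutation_entropy(x, m=3):
    """Normalized Bandt-Pompe permutation entropy of order m: the Shannon
    entropy of ordinal-pattern frequencies, scaled to [0, 1] by log(m!)."""
    counts = {}
    for i in range(len(x) - m + 1):
        # Ordinal pattern: the argsort of the m-sample window.
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(m))

trend = permutation_entropy(list(range(100)))   # one ordinal pattern only
rng = random.Random(0)
noise = permutation_entropy([rng.random() for _ in range(500)])
```

A monotone series produces a single ordinal pattern and entropy 0, while white noise spreads mass over all m! patterns and approaches 1; multiscale variants apply this measure to coarse-grained versions of the signal.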
Complex collaborative problem-solving processes in mission control.
Fiore, Stephen M; Wiltshire, Travis J; Oglesby, James M; O'Keefe, William S; Salas, Eduardo
2014-04-01
NASA's Mission Control Center (MCC) is responsible for control of the International Space Station (ISS), which includes responding to problems that obstruct the functioning of the ISS and that may pose a threat to the health and well-being of the flight crew. These problems are often complex, requiring individuals, teams, and multiteam systems to work collaboratively. Research is warranted to examine individual and collaborative problem-solving processes in this context. Specifically, focus is placed on how Mission Control personnel, each with their own skills and responsibilities, exchange information to gain a shared understanding of the problem. The Macrocognition in Teams Model describes the processes that individuals and teams undertake in order to solve problems and may be applicable to Mission Control teams. Semistructured interviews centering on a recent complex problem were conducted with seven MCC professionals. To compare collaborative problem-solving processes in MCC with those predicted by the Macrocognition in Teams Model, a coding scheme was developed to analyze the interview transcriptions. Findings are supported with excerpts from participant transcriptions and suggest that team knowledge-building processes accounted for approximately 50% of all coded data and are essential for successful collaborative problem solving in mission control. Support for internalized and externalized team knowledge was also found (19% and 20%, respectively). The Macrocognition in Teams Model was shown to be a useful depiction of collaborative problem solving in mission control, and further research with this as a guiding framework is warranted.
Harnessing Complexity: A Framework for Analyzing School Reform
ERIC Educational Resources Information Center
Knight, Stephanie L.; Erlandson, David A.
2003-01-01
The purpose of this article is to discuss an approach to analyzing the interactive effects of multiple reform efforts within a school or school district. The approach resulted from an intensive study of reform in one district in a state characterized by a strong accountability system. First, the authors present a brief history of educational…
New Approach to Analyzing Physics Problems: A Taxonomy of Introductory Physics Problems
ERIC Educational Resources Information Center
Teodorescu, Raluca E.; Bennhold, Cornelius; Feldman, Gerald; Medsker, Larry
2013-01-01
This paper describes research on a classification of physics problems in the context of introductory physics courses. This classification, called the Taxonomy of Introductory Physics Problems (TIPP), relates physics problems to the cognitive processes required to solve them. TIPP was created in order to design educational objectives, to develop…
Fractal applications to complex crustal problems
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1989-01-01
Complex scale-invariant problems obey fractal statistics. The basic definition of a fractal distribution is that the number of objects with a characteristic linear dimension greater than r satisfies the relation N ~ r^(-D), where D is the fractal dimension. Fragmentation often satisfies this relation. The distribution of earthquakes satisfies this relation. The classic relationship between the length of a rocky coastline and the step length can be derived from this relation. Power-law relations for spectra can also be related to fractal dimensions. Topography and gravity are examples. Spectral techniques can be used to obtain maps of fractal dimension and roughness amplitude. These provide a quantitative measure for texture analysis. It is argued that the distribution of stress and strength in a complex crustal region, such as the Alps, is fractal. Based on this assumption, the observed frequency-magnitude relation for the seismicity in the region can be derived.
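The defining relation N ~ r^(-D) suggests a simple estimator: fit a line to log N versus log r. The sketch below recovers D = 2 from synthetic, exactly scale-invariant counts; real fragment or earthquake data would of course be noisy, and the value 2 is purely illustrative.

```python
from math import log

# Synthetic, exactly scale-invariant counts: N(> r) = 1000 * r**(-2), so D = 2.
radii = [1.0, 2.0, 4.0, 8.0, 16.0]
counts = [1000.0 * r ** -2.0 for r in radii]

# Ordinary least-squares slope of log N against log r estimates -D.
xs = [log(r) for r in radii]
ys = [log(c) for c in counts]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
D = -slope
```

The same log-log regression underlies estimates of the b-value in earthquake frequency-magnitude statistics and of fragment-size distributions.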
New approach to analyzing physics problems: A Taxonomy of Introductory Physics Problems
NASA Astrophysics Data System (ADS)
Teodorescu, Raluca E.; Bennhold, Cornelius; Feldman, Gerald; Medsker, Larry
2013-06-01
This paper describes research on a classification of physics problems in the context of introductory physics courses. This classification, called the Taxonomy of Introductory Physics Problems (TIPP), relates physics problems to the cognitive processes required to solve them. TIPP was created in order to design educational objectives, to develop assessments that can evaluate individual component processes of the physics problem-solving process, and to guide curriculum design in introductory physics courses, specifically within the context of a “thinking-skills” curriculum. Moreover, TIPP enables future physics education researchers to investigate to what extent the cognitive processes presented in various taxonomies of educational objectives are exercised during physics problem solving and what relationship might exist between such processes. We describe the taxonomy, give examples of classifications of physics problems, and discuss the validity and reliability of this tool.
System and method for modeling and analyzing complex scenarios
Shevitz, Daniel Wolf
2013-04-09
An embodiment of the present invention includes a method for analyzing and solving a possibility tree. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least one variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
Complex energies and the polyelectronic Stark problem
NASA Astrophysics Data System (ADS)
Themelis, Spyros I.; Nicolaides, Cleanthes A.
2000-12-01
The problem of computing the energy shifts and widths of ground or excited N-electron atomic states perturbed by weak or strong static electric fields is dealt with by formulating a state-specific complex eigenvalue Schrödinger equation (CESE), where the complex energy contains the field-induced shift and width. The CESE is solved to all orders nonperturbatively, by using separately optimized N-electron function spaces, composed of real and complex one-electron functions, the latter being functions of a complex coordinate. The use of such spaces is a salient characteristic of the theory, leading to economy and manageability of calculation in terms of a two-step computational procedure. The first step involves only Hermitian matrices. The second adds complex functions and the overall computation becomes non-Hermitian. Aspects of the formalism and of computational strategy are compared with those of the complex absorption potential (CAP) method, which was recently applied for the calculation of field-induced complex energies in H and Li. Also compared are the numerical results of the two methods, and the questions of accuracy and convergence that were posed by Sahoo and Ho (Sahoo S and Ho Y K 2000 J. Phys. B: At. Mol. Opt. Phys. 33 2195) are explored further. We draw attention to the fact that, because in the region where the field strength is weak the tunnelling rate (imaginary part of the complex eigenvalue) diminishes exponentially, it is possible for even large-scale nonperturbative complex eigenvalue calculations either to fail completely or to produce seemingly stable results which, however, are wrong. It is in this context that the discrepancy in the width of Li 1s²2s ²S between results obtained by the CAP method and those obtained by the CESE method is interpreted. We suggest that the very-weak-field regime must be computed by the golden rule, provided the continuum is represented accurately. In this respect, existing one-particle semiclassical formulae seem
Network-Thinking: Graphs to Analyze Microbial Complexity and Evolution
Corel, Eduardo; Lopez, Philippe; Méheust, Raphaël; Bapteste, Eric
2016-01-01
The tree model and tree-based methods have played a major, fruitful role in evolutionary studies. However, with the increasing realization of the quantitative and qualitative importance of reticulate evolutionary processes, affecting all levels of biological organization, complementary network-based models and methods are now flourishing, inviting evolutionary biology to experience a network-thinking era. We show how relatively recent comers in this field of study, that is, sequence-similarity networks, genome networks, and gene families–genomes bipartite graphs, already allow for a significantly enhanced usage of molecular datasets in comparative studies. Analyses of these networks provide tools for tackling a multitude of complex phenomena, including the evolution of gene transfer, composite genes and genomes, evolutionary transitions, and holobionts. PMID:26774999
MatOFF: A Tool For Analyzing Behaviorally-Complex Neurophysiological Experiments
Genovesio, Aldo; Mitz, Andrew R.
2007-01-01
The simple operant conditioning originally used in behavioral neurophysiology 30 years ago has given way to complex and sophisticated behavioral paradigms, so much so that early general-purpose programs for analyzing neurophysiological data are ill-suited for complex experiments. The trend has been to develop custom software for each class of experiment, but custom software can have serious drawbacks. We describe here a general-purpose software tool for behavioral and electrophysiological studies, called MatOFF, that is especially suited for processing neurophysiological data gathered during the execution of complex behaviors. Written in the MATLAB programming language, MatOFF solves the problem of handling complex analysis requirements in a unique and powerful way. While other neurophysiological programs are either a loose collection of tools or append MATLAB as a post-processing step, MatOFF is an integrated environment that supports MATLAB scripting within the event search engine, safely isolated in a programming sandbox. The results from scripting are stored separately, but in parallel with the raw data, and are thus available to all subsequent MatOFF analysis and display processing. An example from a recently published experiment shows how all the features of MatOFF work together to analyze complex experiments and mine neurophysiological data in efficient ways. PMID:17604115
Analyzing complex gaze behavior in the natural world
NASA Astrophysics Data System (ADS)
Pelz, Jeff B.; Kinsman, Thomas B.; Evans, Karen M.
2011-03-01
The history of eye-movement research extends back at least to 1794, when Erasmus Darwin (Charles' grandfather) published Zoonomia, including descriptions of eye movements due to self-motion. But research on eye movements was restricted to the laboratory for 200 years, until Michael Land built the first wearable eyetracker at the University of Sussex and published the seminal paper "Where we look when we steer" [1]. In the intervening centuries, we learned a tremendous amount about the mechanics of the oculomotor system and how it responds to isolated stimuli, but virtually nothing about how we actually use our eyes to explore, gather information, navigate, and communicate in the real world. Inspired by Land's work, we have been working to extend knowledge in these areas by developing hardware, algorithms, and software that have allowed researchers to ask questions about how we actually use vision in the real world. Central to that effort are new methods for analyzing the volumes of data that come from the experiments made possible by the new systems. We describe a number of recent experiments and SemantiCode, a new program that supports assisted coding of eye-movement data collected in unrestricted environments.
Pyridylamination as a means of analyzing complex sugar chains
Hase, Sumihiro
2010-01-01
Herein, I describe pyridylamination for the versatile analysis of sugar chains. The reducing ends of the sugar chains are tagged with 2-aminopyridine and the resultant chemically stable fluorescent derivatives are used for structural/functional analysis. Pyridylamination is an effective “operating system” for increasing sensitivity and simplifying analytical procedures, including mass spectrometry and NMR. Excellent separation of isomers is achieved by reversed-phase HPLC. However, separation is further improved by two-dimensional HPLC, which involves a combination of reversed-phase HPLC and size-fractionation HPLC. Moreover, a two-dimensional HPLC map is also useful for structural analysis. I describe a simple procedure for preparing homogeneous pyridylamino sugar chains that is less laborious than existing techniques and can be used for functional analysis (e.g., sugar-protein interaction). This novel approach was applied and some of the results are described: i) a glucosyl-serine type sugar chain found in blood coagulation factors; ii) discovery of endo-β-mannosidase (EC 3.2.1.152) and a new type of plant α1,2-L-fucosidase; and iii) novel substrate specificity of a cytosolic α-mannosidase. Moreover, using homogeneous sugar chains of a size similar to in vivo substrates, we were able to analyze interactions between sugar chains and proteins such as enzymes and lectins in detail. Interestingly, our studies reveal that some enzymes recognize a wider region of the substrate than anticipated. PMID:20431262
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, all uncertainty estimates are model dependent, and therefore, besides a thorough characterization of experimental uncertainties, particular attention must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes, by G. Box and M. Gunzburger respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related
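The point that uncertainty estimates are model dependent can be made concrete with a toy example (mine, not the author's): the posterior variance of a parameter in a linear problem changes substantially depending on whether the assumed data variance includes a model-error term.

```python
def posterior_variance(noise_var, model_error_var, g=2.0, n_obs=5):
    """Posterior variance of a single parameter m in the linear model
    d = g*m + error, with a flat prior, for n_obs independent
    observations.  The assumed per-datum variance may or may not
    include a model-error term: the reported uncertainty follows the
    assumption, not the data alone.  Toy illustration only."""
    total_var = noise_var + model_error_var
    # Gaussian likelihood: posterior variance = total_var / (n * g^2)
    return total_var / (n_obs * g * g)

# Same data-fit machinery, two different error models:
naive = posterior_variance(noise_var=0.1, model_error_var=0.0)   # 0.005
honest = posterior_variance(noise_var=0.1, model_error_var=0.4)  # 0.025
```

The "honest" error budget inflates the reported uncertainty fivefold here, which is the sense in which the posterior is only as realistic as the statistical model behind it.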
Hybrid techniques for complex aerospace electromagnetics problems
NASA Technical Reports Server (NTRS)
Aberle, Jim
1993-01-01
Important aerospace electromagnetics problems include the evaluation of antenna performance on aircraft and the prediction and control of the aircraft's electromagnetic signature. Due to the ever increasing complexity and expense of aircraft design, aerospace engineers have become increasingly dependent on computer solutions. Traditionally, computational electromagnetics (CEM) has relied primarily on four disparate techniques: the method of moments (MoM), the finite-difference time-domain (FDTD) technique, the finite element method (FEM), and high frequency asymptotic techniques (HFAT) such as ray tracing. Each of these techniques has distinct advantages and disadvantages, and no single technique is capable of accurately solving all problems of interest on computers that are available now or will be available in the foreseeable future. As a result, new approaches that overcome the deficiencies of traditional techniques are beginning to attract a great deal of interest in the CEM community. Among these new approaches are hybrid methods which combine two or more of these techniques into a coherent model. During the ASEE Summer Faculty Fellowship Program a hybrid FEM/MoM computer code was developed and applied to a geometry containing features found on many modern aircraft.
The Guarding Problem - Complexity and Approximation
NASA Astrophysics Data System (ADS)
Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu
Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops' region and G_R is called the robber's region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and the cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops' region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's, the robber is at vertex u where u ∈ V_R and (u, v) ∈ E, and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the decision version of the guarding problem when G_R is a wheel graph is NP-hard. We then present approximation algorithms for the cases where G_R is a star graph, a clique, and a wheel graph, with approximation ratios H(n_1), 2H(n_1), and H(n_1) + 3/2 respectively, where H(n_1) = 1 + 1/2 + … + 1/n_1 and n_1 = |V_R|.
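The harmonic-number approximation ratios quoted in the abstract are easy to evaluate exactly. A small sketch (function names are mine, for illustration):

```python
from fractions import Fraction

def harmonic(n):
    """H(n) = 1 + 1/2 + ... + 1/n, computed exactly as a rational."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def guarding_ratios(n1):
    """Approximation ratios from the abstract, for n1 = |V_R|:
    H(n1) for a star, 2*H(n1) for a clique, H(n1) + 3/2 for a wheel."""
    h = harmonic(n1)
    return {
        "star": h,
        "clique": 2 * h,
        "wheel": h + Fraction(3, 2),
    }

ratios = guarding_ratios(4)   # H(4) = 25/12
```

Since H(n) grows like ln n, all three ratios are logarithmic in the size of the robber region.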
NASA Technical Reports Server (NTRS)
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
Decomposing a complex design problem using CLIPS
NASA Technical Reports Server (NTRS)
Rogers, James L.
1990-01-01
Many engineering systems are large and multidisciplinary. Before the design of such complex systems can begin, much time and money are invested in determining the possible couplings among the participating subsystems and their parts. For designs based on existing concepts, like commercial aircraft design, the subsystems and their couplings are usually well-established. However, for designs based on novel concepts, like large space platforms, the determination of the subsystems, couplings, and participating disciplines is an important task. Moreover, this task must be repeated as new information becomes available or as the design specifications change. Determining the subsystems is not an easy, straightforward process and often important couplings are overlooked. The design manager must know how to divide the design work among the design teams so that changes in one subsystem will have predictable effects on other subsystems. The resulting subsystems must be ordered into a hierarchical structure before the planning documents and milestones of the design project are set. The success of a design project often depends on the wise choice of design variables, constraints, objective functions, and the partitioning of these among the design teams. Very few tools are available to aid the design manager in determining the hierarchical structure of a design problem and assist in making these decisions.
Problems and solutions in analyzing partial-reflection drift data by correlation techniques
NASA Technical Reports Server (NTRS)
Meek, C. E.
1984-01-01
Problems and solutions in analyzing partial-reflection drift data by correlation techniques are discussed. The analysis of spaced antenna drift data breaks down into the general categories of raw data collection and storage, correlation calculation, interpretation of correlations, location of time lags for peak correlation, and velocity calculation.
Complex network problems in physics, computer science and biology
NASA Astrophysics Data System (ADS)
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe
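One of the mappings mentioned above is standard and compact enough to sketch: Number Partitioning is equivalent to finding the ground state of an infinite-range Ising model with couplings J_ij = a_i * a_j. A brute-force sketch (not from the thesis) over spin configurations:

```python
from itertools import product

def best_partition(numbers):
    """Brute-force the number-partitioning problem by searching over
    Ising spin assignments s_i in {-1, +1}.  The residue |sum s_i a_i|
    is, up to an additive constant, the energy of an infinite-range
    spin glass with couplings J_ij = a_i * a_j, so the optimal
    partition is the spin-glass ground state."""
    best_spins, best_residue = None, None
    for spins in product((-1, 1), repeat=len(numbers)):
        residue = abs(sum(s * a for s, a in zip(spins, numbers)))
        if best_residue is None or residue < best_residue:
            best_spins, best_residue = spins, residue
    return best_spins, best_residue

# 8 + 7 = 6 + 5 + 4, so a perfect (zero-residue) partition exists.
spins, residue = best_partition([8, 7, 6, 5, 4])
```

The exponential search space (2^n configurations) is exactly what makes the problem NP-hard and the physics analogy useful: heuristic ground-state methods from spin-glass theory become candidate solvers.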
Toward Modeling the Intrinsic Complexity of Test Problems
ERIC Educational Resources Information Center
Shoufan, Abdulhadi
2017-01-01
The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…
Team-Based Complex Problem Solving: A Collective Cognition Perspective
ERIC Educational Resources Information Center
Hung, Woei
2013-01-01
Today, much problem solving is performed by teams, rather than individuals. The complexity of these problems has exceeded the cognitive capacity of any individual and requires a team of members to solve them. The success of solving these complex problems not only relies on individual team members who possess different but complementary expertise,…
Zhang, Songchuan; Xia, Youshen
2016-12-28
Much research has been devoted to complex-variable optimization problems due to their engineering applications. However, complex-valued optimization methods for solving complex-variable optimization problems remain an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables. One solves the complex-valued nonlinear programming problem with linear equality constraints. The other solves the complex-valued nonlinear programming problem with both linear equality constraints and an ℓ₁-norm constraint. Theoretically, we prove the global convergence of the two proposed complex-valued optimization algorithms under mild conditions. The two proposed algorithms can solve the complex-valued optimization problem completely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the two proposed algorithms are faster than several conventional real-valued optimization algorithms.
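To illustrate what "completely in the complex domain" can mean, here is a minimal projected-gradient sketch for the linear-equality-constrained case: gradient steps use the Wirtinger gradient and the iterate is projected onto {z : Az = b} without splitting into real and imaginary parts. This is my own illustration of the problem class, not the authors' algorithm.

```python
import numpy as np

def projected_gradient_complex(c, A, b, steps=200, lr=0.4):
    """Minimize f(z) = ||z - c||^2 over complex z subject to A z = b.
    Uses the Wirtinger gradient df/d(conj z) = z - c, followed by an
    affine projection onto {z : A z = b}.  Illustrative sketch only."""
    AH = A.conj().T
    G = np.linalg.inv(A @ AH)          # for the projection operator

    def project(z):
        return z - AH @ (G @ (A @ z - b))

    z = project(np.zeros(c.shape[0], dtype=complex))
    for _ in range(steps):
        z = project(z - lr * (z - c))  # gradient step, then re-project
    return z

c = np.array([1 + 2j, 3 - 1j, -2 + 0.5j])
A = np.array([[1, 1, 1]], dtype=complex)  # constraint: z1 + z2 + z3 = 3
b = np.array([3 + 0j])
z = projected_gradient_complex(c, A, b)
```

For this quadratic objective the iterates converge to the projection of c onto the constraint set, which can be checked against the closed form z* = c - ((1ᵀc - b)/n)·1.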
Managing Complex Problems in Rangeland Ecosystems
USDA-ARS?s Scientific Manuscript database
Management of rangelands, and natural resources in general, has become increasingly complex. There is an atmosphere of increasing expectations for conservation efforts associated with a variety of issues from water quality to endangered species. We argue that many current issues are complex by their...
Complex partial status epilepticus: a recurrent problem.
Cockerell, O C; Walker, M C; Sander, J W; Shorvon, S D
1994-01-01
Twenty patients with complex partial status epilepticus were identified retrospectively from a specialist neurology hospital. Seventeen patients experienced recurrent episodes of complex partial status epilepticus, often occurring at regular intervals, usually over many years, and while being treated with effective anti-epileptic drugs. No unifying cause for the recurrences, and no common epilepsy aetiologies, were identified. In spite of the frequency of recurrence and length of history, none of the patients showed any marked evidence of cognitive or neurological deterioration. Complex partial status epilepticus is more common than is generally recognised, should be differentiated from other forms of non-convulsive status, and is often difficult to treat. PMID:8021671
Organizational Structure and Complex Problem Solving
ERIC Educational Resources Information Center
Becker, Selwyn W.; Baloff, Nicholas
1969-01-01
The problem-solving efficiency of different organization structures is discussed in relation to task requirements and the appropriate organizational behavior, to group adaptation to a task over time, and to various group characteristics. (LN)
Solving the Inverse-Square Problem with Complex Variables
ERIC Educational Resources Information Center
Gauthier, N.
2005-01-01
The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
How Unstable Are Complex Financial Systems? Analyzing an Inter-bank Network of Credit Relations
NASA Astrophysics Data System (ADS)
Sinha, Sitabhra; Thess, Maximilian; Markose, Sheri
The recent worldwide economic crisis of 2007-09 has focused attention on the need to analyze systemic risk in complex financial networks. We investigate the problem of robustness of such systems in the context of the general theory of dynamical stability in complex networks and, in particular, how the topology of connections influences the risk of the failure of a single institution triggering a cascade of successive collapses propagating through the network. We use data on bilateral liabilities (or exposure) in the derivatives market between 202 financial intermediaries based in USA and Europe in the last quarter of 2009 to empirically investigate the network structure of the over-the-counter (OTC) derivatives market. We observe that the network exhibits both heterogeneity in node properties and the existence of communities. It also has a prominent core-periphery organization and can resist large-scale collapse when subjected to individual bank defaults (however, failure of any bank in the core may result in localized collapse of the innermost core with substantial loss of capital) but is vulnerable to system-wide breakdown as a result of an accompanying liquidity crisis.
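The cascade mechanism described above can be sketched as a simple default-contagion simulation on an exposure network. This is a toy model in the spirit of the abstract (the data structure and thresholds are mine), not the authors' actual model:

```python
def cascade(exposures, capital, initially_failed):
    """Simulate a default cascade on an inter-bank exposure network.
    exposures[i][j] is bank i's exposure to bank j (what j owes i);
    when j fails, i absorbs that exposure as a loss, and i itself
    fails once its accumulated losses exceed its capital buffer."""
    n = len(capital)
    failed = set(initially_failed)
    losses = [0.0] * n
    frontier = set(initially_failed)
    while frontier:
        next_frontier = set()
        for j in frontier:
            for i in range(n):
                if i in failed:
                    continue
                losses[i] += exposures[i][j]
                if losses[i] > capital[i]:
                    failed.add(i)
                    next_frontier.add(i)
        frontier = next_frontier
    return failed

# Three banks: bank 0 is a "core" node.  Its default wipes out
# thinly-capitalized bank 1, but well-capitalized bank 2 survives.
exposures = [[0, 0, 0],
             [5, 0, 0],
             [0, 2, 0]]
capital = [1.0, 3.0, 5.0]
failed = cascade(exposures, capital, {0})
```

Even this toy version reproduces the qualitative finding: whether a single default stays localized or propagates depends on where the failed node sits in the topology and on the capital buffers along its out-edges.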
Analyzing the Solution of Word Problems in Mathematics: An Exploratory Study.
ERIC Educational Resources Information Center
Kilpatrick, Jeremy
This study attempted to develop a system for analyzing the processes students use in solving word problems and to investigate the relationships of these processes to other behavioral measures. The subjects in this study were 56 students of both sexes who had above-average mental ability and who had just completed the eighth grade from two junior…
ERIC Educational Resources Information Center
Lee, Young-Jin
2015-01-01
This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor was analyzed to build a logistic regression model that predicted success and failure of students'…
Analyzing the Responses of 7-8 Year Olds When Solving Partitioning Problems
ERIC Educational Resources Information Center
Badillo, Edelmira; Font, Vicenç; Edo, Mequè
2015-01-01
We analyze the mathematical solutions of 7- to 8-year-old pupils while individually solving an arithmetic problem. The analysis was based on the "configuration of objects," an instrument derived from the onto-semiotic approach to mathematical knowledge. Results are illustrated through a number of cases. From the analysis of mathematical…
Analyzing HIV/AIDS and Alcohol and Other Drug Use as a Social Problem
PATTERSON, DAVID A.; Wolf (Adelv unegv Waya), Silver
2012-01-01
Most prevention and intervention activities directed toward HIV/AIDS and alcohol and other drug use, separately as well as in combination (e.g., for those who are both living with HIV/AIDS and using alcohol and other drugs), come in the form of specific, individualized therapies without consideration of social influences that may have a greater impact on this population. Approaching this social problem from the narrowed view of individualized, micro solutions disregards the larger social conditions that affect, or perhaps even are at the root of, the problem. This paper analyzes the social problem of HIV/AIDS and alcohol and other drug abuse using three sociological perspectives (social construction theory, ethnomethodology, and conflict theory), informing the reader of the broader influences accompanying this problem. PMID:23264724
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
Multigrid Methods for Aerodynamic Problems in Complex Geometries
NASA Technical Reports Server (NTRS)
Caughey, David A.
1995-01-01
Work has been directed at the development of efficient multigrid methods for the solution of aerodynamic problems involving complex geometries, including the development of computational methods for the solution of both inviscid and viscous transonic flow problems. The emphasis is on problems of complex, three-dimensional geometry. The methods developed are based upon finite-volume approximations to both the Euler and the Reynolds-Averaged Navier-Stokes equations. The methods are developed for use on multi-block grids using diagonalized implicit multigrid methods to achieve computational efficiency. The work is focused upon aerodynamic problems involving complex geometries, including advanced engine inlets.
A New Approach to Analyzing the Cognitive Load in Physics Problems
NASA Astrophysics Data System (ADS)
Teodorescu, Raluca
2010-02-01
I will present a Taxonomy of Introductory Physics Problems (TIPP), which relates physics problems to the cognitive processes and the knowledge required to solve them. TIPP was created for designing and clarifying educational objectives, for developing assessments to evaluate components of the problem-solving process, and for guiding curriculum design in introductory physics courses. To construct TIPP, I considered processes that have been identified either by cognitive science and expert-novice research or by direct observation of students' behavior while solving physics problems. Based on Marzano and Kendall's taxonomy [1], I developed a procedure to classify physics problems according to the cognitive processes that they involve and the knowledge to which they refer. The procedure is applicable to any physics problem and its validity and reliability have been confirmed. This algorithm was then used to build TIPP, which is a database that contains text-based and research-based physics problems and explains their relationship to cognitive processes and knowledge. TIPP has been used in the years 2006-2009 to reform the first semester of the introductory algebra-based physics course at The George Washington University. The reform targeted students' cognitive development and attitudes improvement. The methodology employed in the course involves exposing students to certain types of problems in a variety of contexts with increasing complexity. To assess the effectiveness of our approach, rubrics were created to evaluate students' problem-solving abilities and the Colorado Learning Attitudes about Science Survey (CLASS) was administered pre- and post-instruction to determine students' shift in dispositions towards learning physics. Our results show definitive gains in the areas targeted by our curricular reform. [1] R.J. Marzano and J.S. Kendall, The New Taxonomy of Educational Objectives, 2nd ed. (Corwin Press, Thousand Oaks, 2007).
Complex Mathematical Problem Solving by Individuals and Dyads.
ERIC Educational Resources Information Center
Vye, Nancy J.; Goldman, Susan R.; Voss, James F.; Hmelo, Cindy; Williams, Susan; Cognition and Technology Group at Vanderbilt University
1997-01-01
Describes two studies of mathematical problem solving using an episode from "The Adventures of Jasper Woodbury," a set of curriculum materials that afford complex problem-solving opportunities. Discussion focuses on characteristics of problems that make solutions difficult, kinds of reasoning that dyadic interactions support, and…
[Simplification of on-line connection of analyzers and standardization problems].
Kataoka, H; Sasaki, M; Kageoka, T; Imamura, J; Nishida, M; Ichihara, K
1997-06-01
Connecting automated analyzers on-line to a laboratory information system (LIS) reduces data-entry mistakes for each sample and speeds up reporting in the clinical laboratory. Combining these instruments with a sample-transport system further enables analyses without direct handling of samples. Constructing such a system, however, is extremely expensive, because every automated analyzer uses a different communication protocol, so a separate program must be written for each machine. Solving this problem requires a standard connection protocol, but devising one that fits all analyzers is very difficult, considering cost and other constraints, and spreading the standard to end users could take decades. ICCLS and NCCLS have taken a central role on this problem since 1996, with a five-year plan to produce an international standard. Until that standard is established, each clinical laboratory must pay high costs to construct its own system interconnecting analyzers from different manufacturers. We have therefore developed a novel system that enables a laboratory automation system to be constructed in a shorter time. To realize this new system, we reduced the number of connection protocols for the analyzers and reorganized the flow of laboratory work, assembling the individual programs into the system as modules. This software system is now in operation in the clinical laboratory of Kochi Medical School. In this report, we describe the construction of this novel software system and the benefits obtained by using it. We also discuss remaining problems and improvements needed for making international standards for communication protocols between host systems and analyzing instruments.
Preparing for Complexity and Wicked Problems through Transformational Learning Approaches
ERIC Educational Resources Information Center
Yukawa, Joyce
2015-01-01
As the information environment becomes increasingly complex and challenging, Library and Information Studies (LIS) education is called upon to nurture innovative leaders capable of managing complex situations and "wicked problems." While disciplinary expertise remains essential, higher levels of mental complexity and adaptive…
Completed Beltrami-Michell formulation for analyzing mixed boundary value problems in elasticity
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Kaljevic, Igor; Hopkins, Dale A.; Saigal, Sunil
1995-01-01
In elasticity, the method of forces, wherein stress parameters are considered as the primary unknowns, is known as the Beltrami-Michell formulation (BMF). The existing BMF can only solve stress boundary value problems; it cannot handle the more prevalent displacement or mixed boundary value problems of elasticity. Therefore, this formulation, which has restricted application, could not become a true alternative to Navier's displacement method, which can solve all three types of boundary value problems. The restrictions in the BMF have been alleviated by augmenting the classical formulation with a novel set of conditions identified as the boundary compatibility conditions. This new method, which completes the classical force formulation, has been termed the completed Beltrami-Michell formulation (CBMF). The CBMF can solve general elasticity problems with stress, displacement, and mixed boundary conditions in terms of stresses as the primary unknowns. The CBMF is derived from the stationary condition of the variational functional of the integrated force method. In the CBMF, stresses for kinematically stable structures can be obtained without any reference to the displacements either in the field or on the boundary. This paper presents the CBMF and its derivation from the variational functional of the integrated force method. Several examples are presented to demonstrate the applicability of the completed formulation for analyzing mixed boundary value problems under thermomechanical loads. Selected example problems include a cylindrical shell wherein membrane and bending responses are coupled, and a composite circular plate.
NASA Technical Reports Server (NTRS)
Lee, H. P.
1977-01-01
The NASTRAN Thermal Analyzer Manual describes the fundamental and theoretical treatment of the finite element method, with emphasis on the derivations of the constituent matrices of different elements and solution algorithms. Necessary information and data relating to the practical applications of engineering modeling are included.
Modeling Complex Chemical Systems: Problems and Solutions
NASA Astrophysics Data System (ADS)
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRT's). The idea of such CRT's is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that has been widely used in plasma physics was developed in the 1960's and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques that have previously been developed in other fields of science are adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, and then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
Florens, Laurence; Carozza, Michael J.; Swanson, Selene K; Fournier, Marjorie; Coleman, Michael K.; Workman, Jerry L.; Washburn, Michael P.
2006-01-01
Mass spectrometry-based approaches are commonly used to identify proteins from multiprotein complexes, typically with the goal of identifying new complex members or identifying post-translational modifications. However, with the recent demonstration that spectral counting is a powerful quantitative proteomic approach, the analysis of multiprotein complexes by mass spectrometry can be reconsidered in certain cases. Using the chromatography-based approach named multidimensional protein identification technology, multiprotein complexes may be analyzed quantitatively using the normalized spectral abundance factor, which allows comparison of multiple independent analyses of samples. This study describes an approach to visualizing multiprotein complex datasets that provides structure-function information superior to tabular lists of data. In this method review, we describe a reanalysis of the Rpd3/Sin3 small and large histone deacetylase complexes, previously described in tabular form, to demonstrate the normalized spectral abundance factor approach. PMID:17101441
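The normalized spectral abundance factor described above is straightforward to sketch. In this hedged example, the function and variable names are illustrative (not taken from the paper's software); the NSAF divides each protein's spectral count by its length and normalizes over all proteins in the run:

```python
# Hedged sketch of the normalized spectral abundance factor (NSAF):
# NSAF_i = (SpC_i / L_i) / sum_j(SpC_j / L_j), where SpC_i is the spectral
# count and L_i the length of protein i. Names are illustrative.
def nsaf(spectral_counts, lengths):
    saf = [c / l for c, l in zip(spectral_counts, lengths)]  # length-normalized counts
    total = sum(saf)
    return [s / total for s in saf]  # normalize so all NSAF values sum to 1

# Example: three proteins observed in one run (made-up numbers)
values = nsaf([10, 40, 50], [100, 200, 500])  # -> [0.25, 0.5, 0.25]
```

Because the NSAF values sum to one within each analysis, they can be compared across independent runs, which is the property the method relies on.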
Inverse problem in archeological magnetic surveys using complex wavelet transform.
NASA Astrophysics Data System (ADS)
Saracco, G.; Moreau, F.; Mathe, P. E.; Hermitte, D.
2003-04-01
The wavelet transform applied to potential fields (electric, magnetic, or gravimetric) has now been used for several years in geophysical applications, in particular to determine the depth of potential sources satisfying the Poisson equation and responsible for potential anomalies measured at the ground surface. The complex continuous wavelet transform (CCWT) has been described, but its phase has not yet been exploited. (For these kinds of problems we construct a complex analyzing wavelet by Hilbert transforms of the Poisson wavelet, or of derivatives of the Poisson wavelet, which is real by definition.) We show here that the phase of the CCWT provides useful information on the geometry and total magnetic inclination of the potential sources, while the modulus characterizes their depth and degree of heterogeneity. Compared with the modulus, the phase is more stable in the presence of noise and can be defined independently of the low energy level of the signal. In this sense, the information carried by the phase is more effective for detecting small objects or separating close sources. We applied a multi-scale analysis to magnetic measurements obtained with a cesium magnetometer at the Fox-Amphoux site (France) to detect and localize buried structures such as antique ovens. Jointly, a rock-magnetic study including susceptibility and magnetization (induced and remanent) measurements gives a better constraint on the magnetic parameters we want to extract.
DNA computing, computation complexity and problem of biological evolution rate.
Melkikh, Alexey V
2008-12-01
An analogy between the evolution of organisms and some complex computational problems (cryptosystem cracking, determination of the shortest path in a graph) is considered. It is shown that in the absence of a priori information about possible species of organisms, such a problem is complex (it falls in the class NP) and cannot be solved in a polynomial number of steps. This conclusion suggests the need to re-examine evolutionary mechanisms. Ideas of a deterministic approach to evolution are discussed.
A Formal Approach to Analyzing Interference Problems in Aspect-Oriented Designs
NASA Astrophysics Data System (ADS)
Chen, Xin; Ye, Nan; Ding, Wenxu
Interference problems in aspect-oriented designs refer to undesired interference between aspects and base programs that can lead to the emergence of unexpected behaviors, harming the correctness of the entire system. We present a rigorous approach to analyzing interference problems in aspect-oriented designs. Formal representations of classes and aspects are defined in terms of designs in UTP, while the weaving techniques of AOP are interpreted as compositions of the corresponding formal models. Conflicts between an aspect and the base program, as well as between two aspects, can be detected by calculating weakest preconditions. Furthermore, the calculation also provides informative guidance on how to resolve the conflicts it finds. Detecting and removing conflicts early in aspect-oriented design models can improve their quality and save considerable cost.
Solution of a Complex Least Squares Problem with Constrained Phase.
Bydder, Mark
2010-12-30
The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
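The phase-constrained problem can be stated concretely. The sketch below is a brute-force illustration, not the direct method of the paper: it restricts the solution to x = u·exp(iθ) with u real and a single common phase θ, solves a real least-squares problem for each candidate θ on a grid, and keeps the best. All names and the grid size are our own assumptions:

```python
import numpy as np

def phase_constrained_lstsq(A, b, n_phases=360):
    """Least-squares solution x = u * exp(i*theta) with u real and one
    common phase theta, found by brute-force search over theta
    (an illustrative sketch, not the paper's direct method)."""
    best = None
    for theta in np.linspace(0.0, np.pi, n_phases, endpoint=False):
        Ae = A * np.exp(1j * theta)
        # Stacking real and imaginary parts turns the fit over the real
        # vector u into an ordinary real least-squares problem
        M = np.vstack([Ae.real, Ae.imag])
        y = np.concatenate([b.real, b.imag])
        u, *_ = np.linalg.lstsq(M, y, rcond=None)
        resid = np.linalg.norm(A @ (u * np.exp(1j * theta)) - b)
        if best is None or resid < best[0]:
            best = (resid, u, theta)
    return best[1], best[2]
```

Restricting θ to [0, π) removes the sign ambiguity u·exp(iθ) = (−u)·exp(i(θ+π)); the direct method in the paper avoids the grid search entirely.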
Kirschke, Sabrina; Newig, Jens; Völker, Jeanette; Borchardt, Dietrich
2017-07-01
Problem complexity is often assumed to hamper effective environmental policy delivery. However, this claim is hardly substantiated, given the dominance of qualitative small-n designs in environmental governance research. We studied 37 types of contemporary problems defined by German water governance to assess the impact of problem complexity on policy delivery through public authorities. The analysis is based on a unique data set related to these problems, encompassing both in-depth interview-based data on complexities and independent official data on policy delivery. Our findings show that complexity in fact tends to delay implementation at the stage of planning. However, different dimensions of complexity (goals, variables, dynamics, interconnections, and uncertainty) impact on the different stages of policy delivery (goal formulation, stages and degrees of implementation) in various ways. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dyspareunia: a complex problem requiring a selective approach.
Walid, Mohammad Sami; Heaton, Richard L
2009-09-01
Dyspareunia frequently has a multifactorial aetiology. The problem with the term is that it is not specific enough and does not allow for proper discussion of the very important problem of pain with sexual intercourse, a problem that can be very disturbing to a couple's relationship. We present two cases of patients who had multiple potential anatomic reasons for dyspareunia. The clinical picture, treatment strategy, and the complex nature of deep penetration pain are discussed. We also propose a new way of defining dyspareunia to allow more adequate study and discussion of the problem.
From problem solving to problem definition: scrutinizing the complex nature of clinical practice.
Cristancho, Sayra; Lingard, Lorelei; Regehr, Glenn
2017-02-01
In medical education, we have tended to present problems as being singular, stable, and solvable. Problem solving has, therefore, drawn much of medical education researchers' attention. This focus has been important but it is limited in terms of preparing clinicians to deal with the complexity of the 21st century healthcare system in which they will provide team-based care for patients with complex medical illness. In this paper, we use the Soft Systems Engineering principles to introduce the idea that in complex, team-based situations, problems usually involve divergent views and evolve with multiple solution iterations. As such we need to shift the conversation from (1) problem solving to problem definition, and (2) from a problem definition derived exclusively at the level of the individual to a definition derived at the level of the situation in which the problem is manifested. Embracing such a focus on problem definition will enable us to advocate for novel educational practices that will equip trainees to effectively manage the problems they will encounter in complex, team-based healthcare.
Semantic Annotation of Complex Text Structures in Problem Reports
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Throop, David R.; Fleming, Land D.
2011-01-01
Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is setup and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
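As a concrete illustration of the optimizer itself (the Runge-Kutta application in the abstract is not reproduced), here is a minimal textbook-style PSO in Python; the parameter values are common defaults, not the paper's settings:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (generic textbook form).
    w is the inertia weight, c1/c2 the cognitive/social coefficients."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest
```

On a smooth test function such as the sphere function, this converges rapidly; applying it to Runge-Kutta pair derivation, as in the paper, would replace `f` with the error measure of a candidate pair.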
Extended variational theory of complex rays in heterogeneous Helmholtz problem
NASA Astrophysics Data System (ADS)
Li, Hao; Ladeveze, Pierre; Riou, Hervé
2017-02-01
In the past years, a numerical technique method called Variational Theory of Complex Rays (VTCR) has been developed for vibration problems in medium frequency. It is a Trefftz Discontinuous Galerkin method which uses plane wave functions as shape functions. However this method is only well developed in homogeneous case. In this paper, VTCR is extended to the heterogeneous Helmholtz problem by creating a new base of shape functions. Numerical examples give a scope of the performances of such an extension of VTCR.
Inverse Spectral Problems for Tridiagonal N by N Complex Hamiltonians
NASA Astrophysics Data System (ADS)
Guseinov, Gusein Sh.
2009-02-01
In this paper, the concept of generalized spectral function is introduced for finite-order tridiagonal symmetric matrices (Jacobi matrices) with complex entries. The structure of the generalized spectral function is described in terms of spectral data consisting of the eigenvalues and normalizing numbers of the matrix. The inverse problems from generalized spectral function as well as from spectral data are investigated. In this way, a procedure for construction of complex tridiagonal matrices having real eigenvalues is obtained.
Theory of periodically specified problems: Complexity and approximability
Marathe, M.V.; Hunt, H.B. III; Stearns, R.E.; Rosenkrantz, D.J.
1997-12-05
We study the complexity and the efficient approximability of graph and satisfiability problems when specified using various kinds of periodic specifications. The general results obtained include the following: (1) We characterize the complexities of several basic generalized CNF satisfiability problems SAT(S) [Sc78] when instances are specified using various kinds of 1- and 2-dimensional periodic specifications. We outline how this characterization can be used to prove a number of new hardness results for the complexity classes DSPACE(n), NSPACE(n), DEXPTIME, NEXPTIME, EXPSPACE, etc. These results can be used to prove, in a unified way, the hardness of a number of combinatorial problems when instances are specified succinctly using the various succinct specifications considered in the literature. As one corollary, we show that a number of basic NP-hard problems become EXPSPACE-hard when inputs are represented using 1-dimensional infinite periodic wide specifications. This answers a long-standing open question posed by Orlin. (2) We outline a simple yet general technique for devising approximation algorithms with provable worst-case performance guarantees for a number of periodically specified combinatorial problems. Our efficient approximation algorithms and schemes are based on extensions of earlier ideas and represent the first non-trivial characterization of a class of periodically specified NEXPTIME-hard problems that admit an ε-approximation (or PTAS). Two properties of our results are: (i) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; and (ii) our results are the first polynomial-time approximation algorithms with good performance guarantees for hard problems specified using the various kinds of periodic specifications considered in this paper.
EEG activity during the performance of complex mental problems.
Jausovec, N; Jausovec, K
2000-04-01
This study investigated differences in cognitive processes related to problem complexity. It was assumed that these differences would be reflected in respondents' EEG activity--spectral power and coherence. A second issue of the study was to compare differences between the lower (alpha(1) = 7.9-10.0 Hz) and upper alpha band (alpha(2) = 10.1-12.9 Hz). In the first experiment two well-defined problems with two levels of complexity were used. Only minor differences in EEG power and coherence measures related to problem complexity were observed. In the second experiment divergent production problems resembling tasks on creativity tests were compared with dialectic problems calling for creative solutions. Differences in EEG power measures were mainly related to the form of problem presentation (figural/verbal). In contrast, coherence was related to the level of creativity needed to solve a problem. Noticeably increased intra- and interhemispheric cooperation, mainly between far distant brain regions, was observed in the EEG activity of respondents while solving the dialectic problems. These results are explained by the more intense involvement of the long cortico-cortical fiber system in creative thinking. Differences between the lower and upper alpha band were significant for the power and coherence measures. In Experiment 2, fewer differences were observed in power measures in the upper alpha band than in the lower alpha band. A reverse pattern was observed for the coherence measures. These results hint at a functional independence of the two alpha bands; however, they do not permit firm conclusions about their functional meanings. The study showed that it is unlikely that individuals solve well- and ill-defined problems by employing similar cognitive strategies.
Cook, Daniel L; Farley, Joel F; Tapscott, Stephen J
2001-01-01
Background: We propose that a computerized, internet-based graphical description language for systems biology will be essential for describing, archiving and analyzing complex problems of biological function in health and disease. Results: We outline here a conceptual basis for designing such a language and describe BioD, a prototype language that we have used to explore the utility and feasibility of this approach to functional biology. Using example models, we demonstrate that a rather limited lexicon of icons and arrows suffices to describe complex cell-biological systems as discrete models that can be posted and linked on the internet. Conclusions: Given available computer and internet technology, BioD may be implemented as an extensible, multidisciplinary language that can be used to archive functional systems knowledge and be extended to support both qualitative and quantitative functional analysis. PMID:11305940
Olae: A Bayesian Performance Assessment for Complex Problem Solving.
ERIC Educational Resources Information Center
VanLehn, Kurt
Olae is a computer system for assessing student knowledge of physics, and Newtonian mechanics in particular, using performance data collected while students solve complex problems. Although originally designed as a stand-alone system, it has also been used as part of the Andes intelligent tutoring system. Like many other performance assessment…
Investigating the Effect of Complexity Factors in Gas Law Problems
ERIC Educational Resources Information Center
Schuttlefield, Jennifer D.; Kirk, John; Pienta, Norbert J.; Tang, Hui
2012-01-01
Undergraduate students were asked to complete gas law questions using a Web-based tool as a first step in our understanding of the role of cognitive load in chemistry word questions and in helping us assess student problem-solving. Each question contained five different complexity factors, which were randomly assigned by the tool so that a…
What Do Employers Pay for Employees' Complex Problem Solving Skills?
ERIC Educational Resources Information Center
Ederer, Peer; Nedelkoska, Ljubica; Patt, Alexander; Castellazzi, Silvia
2015-01-01
We estimate the market value that employers assign to the complex problem solving (CPS) skills of their employees, using individual-level Mincer-style wage regressions. For the purpose of the study, we collected new and unique data using psychometric measures of CPS and an extensive background questionnaire on employees' personal and work history.…
Application of NASA management approach to solve complex problems on earth
NASA Technical Reports Server (NTRS)
Potate, J. S.
1972-01-01
The application of NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussions.
A generalized topological entropy for analyzing the complexity of DNA sequences.
Jin, Shuilin; Tan, Renjie; Jiang, Qinghua; Xu, Li; Peng, Jiajie; Wang, Yong; Wang, Yadong
2014-01-01
Topological entropy is one of the most difficult entropies to apply to DNA sequences, due to the finite-sample and high-dimensionality problems. In order to overcome these problems, a generalized topological entropy is introduced. The relationship between the topological entropy and the generalized topological entropy is examined, showing that the topological entropy is a special case of the generalized entropy. As an application, the generalized topological entropy of introns, exons, and promoter regions was computed. The results indicate that the entropy of introns is higher than that of exons, and the entropy of exons is higher than that of promoter regions, for each chromosome, suggesting that the DNA sequence of the promoter regions is more regular than that of the exons and introns.
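To make the quantity concrete, here is a simplified substring-complexity sketch of topological entropy for a symbolic sequence, using log base 4 for the DNA alphabet. This illustrates the classical quantity the paper generalizes; the generalized entropy itself, which corrects for finite-sample effects, is not reproduced here:

```python
from math import log

def topological_entropy(seq, k):
    """Simplified substring-complexity entropy: log base 4 of the number
    of distinct k-mers in seq, normalized by k. (A sketch of the classical
    notion; the paper's generalized entropy modifies this to cope with
    finite samples and high dimensionality.)"""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    return log(len(kmers), 4) / k

# A maximally repetitive sequence scores 0; a sequence that uses the whole
# alphabet at k = 1 scores 1. Lower values indicate more regular sequences,
# matching the paper's ordering: promoters < exons < introns.
```
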
Analyzing Pre-Service Primary Teachers' Fraction Knowledge Structures through Problem Posing
ERIC Educational Resources Information Center
Kilic, Cigdem
2015-01-01
This study aimed to determine pre-service primary teachers' knowledge structures for fractions through problem-posing activities. A total of 90 pre-service primary teachers participated. A problem-posing test consisting of two questions was used, and the participants were asked to generate as many problems as possible based on the…
The Complex Route to Success: Complex Problem-Solving Skills in the Prediction of University Success
ERIC Educational Resources Information Center
Stadler, Matthias J.; Becker, Nicolas; Greiff, Samuel; Spinath, Frank M.
2016-01-01
Successful completion of a university degree is a complex matter. Based on considerations regarding the demands of acquiring a university degree, the aim of this paper was to investigate the utility of complex problem-solving (CPS) skills in the prediction of objective and subjective university success (SUS). The key finding of this study was that…
Conway, Mike; Berg, Richard L.; Carrell, David; Denny, Joshua C.; Kho, Abel N.; Kullo, Iftikhar J.; Linneman, James G.; Pacheco, Jennifer A.; Peissig, Peggy; Rasmussen, Luke; Weston, Noah; Chute, Christopher G.; Pathak, Jyotishman
2011-01-01
The need for formal representations of eligibility criteria for clinical trials – and for phenotyping more generally – has been recognized for some time. Indeed, the availability of a formal computable representation that adequately reflects the types of data and logic evidenced in trial designs is a prerequisite for the automatic identification of study-eligible patients from Electronic Health Records. As part of the wider process of representation development, this paper reports on an analysis of fourteen Electronic Health Record oriented phenotyping algorithms (developed as part of the eMERGE project) in terms of their constituent data elements, types of logic used and temporal characteristics. We discovered that the majority of eMERGE algorithms analyzed include complex, nested boolean logic and negation, with several dependent on cardinality constraints and complex temporal logic. Insights gained from the study will be used to augment the CDISC Protocol Representation Model. PMID:22195079
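The constructs the analysis found in the eMERGE algorithms (nested boolean logic, negation, cardinality constraints, temporal qualifiers) can be illustrated with a toy criterion. The codes, thresholds, and function below are entirely hypothetical and are not taken from any actual eMERGE algorithm:

```python
from datetime import date, timedelta

# Hypothetical phenotyping criterion illustrating the logic types the study
# catalogued: nested booleans, negation (the exclusion), a cardinality
# constraint (>= 2 lab results), and temporal logic (diagnosis within 1 year).
def eligible(diagnoses, labs, today=date(2011, 1, 1)):
    """diagnoses: list of (code, date); labs: list of (test, value, date)."""
    recent = lambda d: (today - d) <= timedelta(days=365)
    # Temporal constraint: qualifying diagnosis within the past year
    has_dx = any(code == "250.00" and recent(d) for code, d in diagnoses)
    # Cardinality constraint: at least two elevated glucose results
    high_glucose = [v for t, v, d in labs if t == "glucose" and v > 126]
    # Negation: no exclusion diagnosis on record
    no_exclusion = not any(code == "648.8" for code, _ in diagnoses)
    return has_dx and len(high_glucose) >= 2 and no_exclusion
```

A formal representation model has to capture each of these construct types explicitly, which is why the authors use the analysis to augment the CDISC Protocol Representation Model.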
Binocular adaptive optics vision analyzer with full control over the complex pupil functions.
Schwarz, Christina; Prieto, Pedro M; Fernández, Enrique J; Artal, Pablo
2011-12-15
We present a binocular adaptive optics vision analyzer fully capable of controlling both amplitude and phase of the two complex pupil functions in each eye of the subject. A special feature of the instrument is its comparatively simple setup. A single reflective liquid crystal on silicon spatial light modulator working in pure phase modulation generates the phase profiles for both pupils simultaneously. In addition, another liquid crystal spatial light modulator working in transmission operates in pure intensity modulation to produce a large variety of pupil masks for each eye. Subjects perform visual tasks through any predefined variations of the complex pupil function for both eyes. As an example of the system efficiency, we recorded images of the stimuli through the system as they were projected at the subject's retina. This instrument proves to be extremely versatile for designing and testing novel ophthalmic elements and simulating visual outcomes, as well as for further research of binocular vision.
Analyzing networks of phenotypes in complex diseases: methodology and applications in COPD
2014-01-01
Background The investigation of complex disease heterogeneity has been challenging. Here, we introduce a network-based approach, using partial correlations, that analyzes the relationships among multiple disease-related phenotypes. Results We applied this method to two large, well-characterized studies of chronic obstructive pulmonary disease (COPD). We also examined the associations between these COPD phenotypic networks and other factors, including case-control status, disease severity, and genetic variants. Using these phenotypic networks, we have detected novel relationships between phenotypes that would not have been observed using traditional epidemiological approaches. Conclusion Phenotypic network analysis of complex diseases could provide novel insights into disease susceptibility, disease severity, and genetic mechanisms. PMID:24964944
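The partial-correlation idea behind such phenotype networks can be sketched in a few lines. This is not the authors' implementation; it is the standard first-order partial correlation (shown here for three phenotype vectors, in pure Python) that edge weights in networks of this kind are commonly based on.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of phenotypes x and y with the linear effect of a
    third phenotype z removed (first-order partial correlation)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

An edge is typically drawn between two phenotypes only when their partial correlation (conditioning on the others) remains meaningfully nonzero, which is what distinguishes this from a plain correlation network.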
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher
2005-01-01
This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.
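One of the simplest diagnostics separating "complex network" models from traditional graph-theoretic description is the degree distribution (scale-free networks have a heavy-tailed one). A minimal illustrative sketch, assuming the network is given as an undirected edge list; this is generic graph code, not any model from the report.

```python
from collections import Counter

def degree_distribution(edges):
    """Map degree -> number of nodes with that degree,
    for an undirected edge list."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return dict(Counter(degree.values()))

# A hub-and-spoke (star) network: one high-degree hub, many degree-1 leaves.
star = [(0, i) for i in range(1, 6)]
# A ring network: every node has degree 2.
ring = [(i, (i + 1) % 6) for i in range(6)]
```

Comparing the two toy topologies makes the distinction visible: the star yields a skewed distribution (one hub, many leaves), while the ring is perfectly homogeneous.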
Complexity and efficient approximability of two dimensional periodically specified problems
Marathe, M.V.; Hunt, H.B. III; Stearns, R.E.
1996-09-01
The authors consider two-dimensional periodic specifications: a method to succinctly specify objects with highly regular, repetitive structure. Such specifications arise naturally when processing engineering designs, including VLSI designs, and can describe objects whose sizes are exponentially larger than the sizes of the specifications themselves. Consequently, solving a periodically specified problem by explicitly expanding the instance is prohibitively expensive in terms of computational resources. This motivates investigating the complexity and efficient approximability of graph-theoretic and combinatorial problems when instances are specified using two-dimensional periodic specifications. They prove the following results: (1) several classical NP-hard optimization problems become NEXPTIME-hard when instances are specified using two-dimensional periodic specifications; (2) in contrast, several of these NEXPTIME-hard problems have polynomial-time approximation algorithms with guaranteed worst-case performance.
Integrating perception and problem solving to predict complex object behaviours
NASA Astrophysics Data System (ADS)
Lyons, Damian M.; Chaudhry, Sirhan; Agica, Marius; Monaco, John Vincent
2010-04-01
One of the objectives of Cognitive Robotics is to construct robot systems that can be directed to achieve real-world goals by high-level directions rather than complex, low-level robot programming. Such a system must have the ability to represent, problem-solve and learn about its environment, as well as communicate with other agents. In previous work, we have proposed ADAPT, a Cognitive Architecture that views perception as top-down, goal-oriented, and part of the problem-solving process. Our approach is linked to a SOAR-based problem-solving and learning framework. In this paper, we present an architecture for the perceptive and world-modelling components of ADAPT and report on experimental results using this architecture to predict complex object behaviour. A novel aspect of our approach is a 'mirror system' that ensures that the modelled background and foreground objects are synchronized with observations and task-based expectations. This is based on our prior work on comparing real and synthetic images. We show results for a moving object that collides with and rebounds from its environment, demonstrating that this perception-based problem-solving approach has the potential to predict complex object motions.
How Humans Solve Complex Problems: The Case of the Knapsack Problem
Murawski, Carsten; Bossaerts, Peter
2016-01-01
Life presents us with problems of varying complexity. Yet, complexity is not accounted for in theories of human decision-making. Here we study instances of the knapsack problem, a discrete optimisation problem commonly encountered at all levels of cognition, from attention gating to intellectual discovery. The complexity of this problem is well understood from the perspective of a mechanical device such as a computer. We show experimentally that human performance, too, decreased with complexity as defined in computer science. Defying traditional economic principles, participants spent effort well beyond the point where marginal gain was positive, and economic performance increased with instance difficulty. Human attempts at solving the instances exhibited commonalities with algorithms developed for computers, although biological resource constraints (limited working and episodic memory) had a noticeable impact. Consistent with the very nature of the knapsack problem, only a minority of participants found the solution, often quickly, but the ones who did appeared not to realise it. Substantial heterogeneity emerged, suggesting why prizes and patents, schemes that incentivise intellectual discovery but discourage information sharing, have been found to be less effective than mechanisms that reveal private information, such as markets. PMID:27713516
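To make the notion of complexity concrete: even the most naive exact method for the 0/1 knapsack problem must consider up to 2^n item subsets, so effort grows steeply with instance size. A brute-force sketch for illustration only; this is not the algorithm used in the study.

```python
from itertools import combinations

def knapsack_brute_force(values, weights, capacity):
    """Optimal total value by exhaustive search over all item subsets.
    Examines O(2^n) subsets, which is why instance difficulty explodes
    with the number of items n."""
    n = len(values)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best
```

Dynamic programming solves integer-weight instances in pseudo-polynomial time, but the decision version remains NP-complete, which is the computer-science notion of complexity the study maps onto human performance.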
[Problems of protecting complex inventions in the field of microbiology].
Korovkin, V I
1978-01-01
Problems connected with the protection of inventions in the field of microbiology are discussed for cases in which the invention is complex. The rights of the author are determined when a method and a product are to be protected at the same time. Additional juridical protection of a microbial strain is not necessary. Complex protection of a microbial strain together with the method of its utilization is recommended in certain cases, since it might prevent the conflicts that arise from parallel juridical protection of a strain and the method of its utilization.
Complexity and Approximability of Quantified and Stochastic Constraint Satisfaction Problems
H. B. HUNT; M. V. MARATHE; R. E. STEARNS
2001-06-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, and T. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems for unquantified formulas has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97]. Keywords: NP-hardness; Approximation Algorithms; PSPACE-hardness; Quantified and Stochastic Constraint Satisfaction Problems.
Data Mining and Complex Problems: Case Study in Composite Materials
NASA Technical Reports Server (NTRS)
Rabelo, Luis; Marin, Mario
2009-01-01
Data mining is defined as the discovery of useful, possibly unexpected, patterns and relationships in data using statistical and non-statistical techniques in order to develop schemes for decision and policy making. Data mining can be used to discover the sources and causes of problems in complex systems. In addition, data mining can support simulation strategies by finding the different constants and parameters to be used in the development of simulation models. This paper introduces a framework for data mining and its application to complex problems. To further explain some of the concepts outlined in this paper, the potential application to the NASA Shuttle Reinforced Carbon-Carbon structures and genetic programming is used as an illustration.
Complexity and approximability of quantified and stochastic constraint satisfaction problems
Hunt, H. B.; Stearns, R. L.; Marathe, M. V.
2001-01-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), for many different D, C, and S. These problem variants include decision and optimization problems for unquantified, quantified, and stochastically quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤^bw_log n-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4
Gensler, Manuel; Eidamshaus, Christian; Taszarek, Maurice; Reissig, Hans-Ulrich
2015-01-01
Multivalent biomolecular interactions allow for a balanced interplay of mechanical stability and malleability, and nature makes wide use of this. For instance, systems of similar thermal stability may have very different rupture forces. It is therefore of paramount interest to study and understand the mechanical properties of multivalent systems through well-characterized model systems. We analyzed the rupture behavior of three different bivalent pyridine coordination complexes with Cu2+ in aqueous environment by single-molecule force spectroscopy. Those complexes share the same supramolecular interaction, leading to similar thermal off-rates in the range of 0.09 to 0.36 s−1, compared to 1.7 s−1 for the monovalent complex. On the other hand, the backbones exhibit different flexibility, and we determined a broad range of rupture lengths between 0.3 and 1.1 nm, with higher most-probable rupture forces for the stiffer backbones. Interestingly, the medium-flexible connection has the highest rupture forces, whereas the ligands with the highest and lowest rigidity seem to be prone to consecutive bond rupture. The presented approach allows separating bond and backbone effects in multivalent model systems. PMID:26124883
Simpson, Sean L.; Bowman, F. DuBois; Laurienti, Paul J.
2014-01-01
Complex functional brain network analyses have exploded over the last decade, gaining traction due to their profound clinical implications. The application of network science (an interdisciplinary offshoot of graph theory) has facilitated these analyses and enabled examining the brain as an integrated system that produces complex behaviors. While the field of statistics has been integral in advancing activation analyses and some connectivity analyses in functional neuroimaging research, it has yet to play a commensurate role in complex network analyses. Fusing novel statistical methods with network-based functional neuroimage analysis will engender powerful analytical tools that will aid in our understanding of normal brain function as well as alterations due to various brain disorders. Here we survey widely used statistical and network science tools for analyzing fMRI network data and discuss the challenges faced in filling some of the remaining methodological gaps. When applied and interpreted correctly, the fusion of network scientific and statistical methods has a chance to revolutionize the understanding of brain function. PMID:25309643
Complexity of hierarchically and 1-dimensional periodically specified problems
Marathe, M.V.; Hunt, H.B. III; Stearns, R.E.; Radhakrishnan, V.
1995-08-23
We study the complexity of various combinatorial and satisfiability problems when instances are specified using one of the following specifications: (1) the 1-dimensional finite periodic narrow specifications of Wanke and Ford et al.; (2) the 1-dimensional finite periodic narrow specifications with explicit boundary conditions of Gale; (3) the 2-way infinite 1-dimensional narrow periodic specifications of Orlin et al.; and (4) the hierarchical specifications of Lengauer et al. We obtain three general types of results. First, we prove that there is a polynomial-time algorithm that, given a 1-FPN- or 1-FPN(BC)-specification of a graph (or a CNF formula), constructs a level-restricted L-specification of an isomorphic graph (or formula). This theorem, along with the hardness results proved here, provides alternative and unified proofs of many hardness results proved in the past either by Lengauer and Wagner or by Orlin. Second, we study the complexity of the generalized CNF satisfiability problems of Schaefer. Assuming P ≠ PSPACE, we completely characterize the polynomial-time solvability of these problems when instances are specified as in (1), (2), (3) or (4). As applications of our first two types of results, we obtain a number of new PSPACE-hardness results and polynomial-time algorithms for problems specified as in (1), (2), (3) or (4). Many of our results also hold for O(log N) bandwidth-bounded planar instances.
TOPAZ - the transient one-dimensional pipe flow analyzer: code validation and sample problems
Winters, W.S.
1985-10-01
TOPAZ is a "user friendly" computer code for modeling the one-dimensional, transient physics of multi-species gas transfer in arbitrary arrangements of pipes, valves, vessels, and flow branches. This document presents a series of sample problems designed to aid potential users in creating TOPAZ input files. To the extent possible, sample problems were selected for which analytical solutions currently exist. TOPAZ comparisons with such solutions are intended to provide a measure of code validation.
Unpacking complexity in the analysis of environmental and geologic problems
Pinet, P.R. (Geology Dept.)
1992-01-01
In order to understand or to make policy decisions about environmental issues, it is imperative that the complexity of the problem be unpacked, that is, that its causes and effects be separated into a natural hierarchical scheme. Unpacking complexity separates the elements that affect natural systems into primary, secondary, and tertiary factors. Primary factors are universal in the sense that they operate in a fundamental way anywhere on the globe where the system is present. Secondary factors interact with the primary elements to infuse regional characteristics into the system. Local (site-specific) factors impose a tertiary level of complexity that operates on small spatial scales. The utility of this technique is demonstrated by several examples: the origin of an Atlantic-type continental margin, a beach-erosion study, and a groundwater investigation. The appraisal of environmental problems on a truly global scale involves evaluating the primary elements of the system and de-emphasizing the secondary and tertiary factors that are inappropriate to the scale of the study. On the other hand, policy decisions regarding a regional coastal-erosion problem or the management of a large watershed require that primary and secondary elements be addressed and that tertiary factors be put aside. Moreover, assessing the nature of erosion at a specific beach or managing a local tract of woodland must include a consideration of all causes and effects that occur at the primary, secondary, and tertiary levels. This hierarchical analysis applies to temporal scales as well. For example, solutions to beach-erosion or deforestation problems are very different when considering causes and effects over years, decades, centuries, or millennia.
Solving and analyzing side-chain positioning problems using linear and integer programming.
Kingsford, Carleton L; Chazelle, Bernard; Singh, Mona
2005-04-01
Side-chain positioning is a central component of homology modeling and protein design. In a common formulation of the problem, the backbone is fixed, side-chain conformations come from a rotamer library, and a pairwise energy function is optimized. It is NP-complete to find even a reasonable approximate solution to this problem. We seek to put this hardness result into practical context. We present an integer linear programming (ILP) formulation of side-chain positioning that allows us to tackle large problem sizes. We relax the integrality constraint to give a polynomial-time linear programming (LP) heuristic. We apply LP to position side chains on native and homologous backbones and to choose side chains for protein design. Surprisingly, when positioning side chains on native and homologous backbones, optimal solutions using a simple, biologically relevant energy function can usually be found using LP. On the other hand, the design problem often cannot be solved using LP directly; however, optimal solutions for large instances can still be found using the computationally more expensive ILP procedure. While different energy functions also affect the difficulty of the problem, the LP/ILP approach is able to find optimal solutions. Our analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem.
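The objective being optimized here can be illustrated with a toy exhaustive search over rotamer assignments. This is a stand-in for intuition only; the paper's contribution is the ILP/LP formulation, which scales far beyond what enumeration can handle.

```python
from itertools import product

def best_rotamers(self_energy, pair_energy):
    """Exhaustively minimise E = sum_i E(i, r_i) + sum_{i<j} E(i, r_i, j, r_j).

    self_energy[i][r]            : energy of rotamer r at position i
    pair_energy[(i, j)][r][s]    : interaction of rotamer r at i with s at j (i < j)

    Runtime is exponential in the number of positions, which is why
    LP/ILP formulations are needed for realistic problem sizes."""
    n = len(self_energy)
    best, best_assign = float("inf"), None
    for assign in product(*(range(len(s)) for s in self_energy)):
        e = sum(self_energy[i][assign[i]] for i in range(n))
        for (i, j), table in pair_energy.items():
            e += table[assign[i]][assign[j]]
        if e < best:
            best, best_assign = e, assign
    return best, best_assign
```

In the ILP formulation, each (position, rotamer) choice becomes a 0/1 variable and each pairwise term becomes a product variable with linking constraints; relaxing integrality gives the polynomial-time LP heuristic described in the abstract.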
NASA Astrophysics Data System (ADS)
Martins, Luis Gustavo Nogueira; Stefanello, Michel Baptistella; Degrazia, Gervásio Annes; Acevedo, Otávio Costa; Puhales, Franciano Scremin; Demarco, Giuliano; Mortarini, Luca; Anfossi, Domenico; Roberti, Débora Regina; Denardin, Felipe Costa; Maldaner, Silvana
2016-11-01
In this study we analyze natural complex signals employing Hilbert-Huang spectral analysis. Specifically, low-wind meandering meteorological data are decomposed into turbulent and non-turbulent components. These non-turbulent movements, responsible for the absence of a preferential direction of the horizontal wind, provoke negative lobes in the meandering autocorrelation functions. The meandering characteristic time scales (meandering periods) are determined from the spectral peak provided by the Hilbert-Huang marginal spectrum. The magnitudes of the temperature and horizontal wind meandering periods obtained agree with the results found from the best fit of the heuristic meandering autocorrelation functions. Therefore, the new method represents a new procedure for evaluating meandering periods that does not employ mathematical expressions to represent observed meandering autocorrelation functions.
Complex Genotype Mixtures Analyzed by Deep Sequencing in Two Different Regions of Hepatitis B Virus.
Caballero, Andrea; Gregori, Josep; Homs, Maria; Tabernero, David; Gonzalez, Carolina; Quer, Josep; Blasi, Maria; Casillas, Rosario; Nieto, Leonardo; Riveiro-Barciela, Mar; Esteban, Rafael; Buti, Maria; Rodriguez-Frias, Francisco
2015-01-01
This study assesses the presence and outcome of genotype mixtures in the polymerase/surface and X/preCore regions of the HBV genome in patients with chronic hepatitis B virus (HBV) infection. Thirty samples from ten chronic hepatitis B patients were included. The polymerase/surface and X/preCore regions were analyzed by deep sequencing (UDPS) in the first available sample at diagnosis, a pre-treatment sample, and a sample while under treatment. HBV genotype was determined by phylogenesis. Quasispecies complexity was evaluated by mutation frequency and nucleotide diversity. The polymerase/surface and X/preCore regions were validated for genotyping from 113 GenBank reference sequences. UDPS yielded a median of 10,960 sequences per sample (IQR 16,645) in the polymerase/surface region and 11,595 sequences per sample (IQR 14,682) in X/preCore. Genotype mixtures were more common in X/preCore (90%) than in polymerase/surface (30%) (p<0.001). On X/preCore genotyping, all samples were genotype A, whereas polymerase/surface yielded genotypes A (80%), D (16.7%), and F (3.3%) (p = 0.036). Genotype changes in polymerase/surface were observed in four patients during natural quasispecies dynamics and in two patients during treatment. There were no genotype changes in X/preCore. Quasispecies complexity was higher in X/preCore than in polymerase/surface (p = 0.004). The results provide evidence of genotype mixtures and differential genotype proportions in the polymerase/surface and X/preCore regions. The genotype dynamics in HBV infection and the different patterns of quasispecies complexity in the HBV genome suggest a new paradigm for HBV genotype classification.
NASA Astrophysics Data System (ADS)
Walker, David N.; Fernsler, Richard F.; Blackwell, David D.; Amatucci, William E.
2008-11-01
In earlier work, using a network analyzer, we have shown the existence of collisionless resistance (CR) in the sheath of a spherical probe when driven by a small rf signal. As shown in that paper, the CR depends on the plasma density gradient at a given location. Because of this there is a cutoff in the CR which is proportional to the applied bias level and which will occur at the plasma frequency at the surface of the probe, r = r0. We show that, in the frequency regime ω_pi ≪ ω ≪ ω_pe(r0), the complex impedance measurements made with a network analyzer can be used to determine electron temperature. We present an overview of the theory used, along with comparisons to data sets taken with three small spherical probes of different sizes. The numerical algorithm requires only a solution of the Poisson equation to determine the approximate sheath dimensions, and integrals to determine approximate plasma and sheath inductances. We compare the results of the temperature measurements to those made by conventional Langmuir probe sweeps. Walker, D.N., R.F. Fernsler, D.D. Blackwell, W.E. Amatucci, S.J. Messer, Phys. of Plasmas, 13, 032108 (2006).
An approach to complex acid-base problems
Herd, Anthony M.
2005-01-01
OBJECTIVE To review rules and formulas for solving even the most complex acid-base problems. SOURCES OF INFORMATION MEDLINE was searched from January 1966 to December 2003. The search was limited to English-language review articles involving human subjects. Nine relevant review papers were found and provide the background. As this information is well established and widely accepted, it is not judged for strength of evidence, as is standard practice. MAIN MESSAGE An understanding of the body's responses to acidemia or alkalemia can be gained through a set of four rules and two formulas that can be used to interpret almost any acid-base problem. Physicians should, however, remember the "golden rule" of acid-base interpretation: always look at the patient's clinical condition. CONCLUSION Physicians practising in acute care settings commonly encounter acid-base disturbances. While some of these are relatively simple and easy to interpret, others are more complex. Even complex cases can be resolved using the four rules and two formulas. PMID:15751566
ERIC Educational Resources Information Center
Turner, Rita; Donnelly, Ryan
2013-01-01
This article outlines the features and application of a set of model curriculum materials that utilize eco-democratic principles and humanities-based content to cultivate critical analysis of the cultural foundations of socio-environmental problems. We first describe the goals and components of the materials, then discuss results of their use in…
Analyzing Energy and Resource Problems: An Interdisciplinary Approach to Mathematical Modeling.
ERIC Educational Resources Information Center
Fishman, Joseph
1993-01-01
Suggests ways in which mathematical models can be presented and developed in the classroom to promote discussion, analysis, and understanding of issues related to energy consumption. Five problems deal with past trends and future projections of availability of a nonrenewable resource, natural gas. (Contains 13 references.) (MDH)
Using a Database to Analyze Core Basic Science Content in a Problem-Based Curriculum.
ERIC Educational Resources Information Center
Rosen, Robert L.; And Others
1992-01-01
A study used computer analysis to examine distribution of basic science content in the 53 cases in the problem-based medical curriculum of Rush Medical College (Illinois) and compared it to application of that content by students and faculty. The method of analysis is recommended for reviewing curricula for omissions and redundancy. (Author/MSE)
Complexity and approximability of certain bicriteria location problems
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-10-01
We investigate the complexity and approximability of some location problems when two distance values are specified for each pair of potential sites. These problems involve the selection of a specified number of facilities (i.e. a placement of a specified size) to minimize a function of one distance metric subject to a budget constraint on the other distance metric. Such problems arise in several application areas including statistical clustering, pattern recognition and load-balancing in distributed systems. We show that, in general, obtaining placements that are near-optimal with respect to the first distance metric is NP-hard even when we allow the budget constraint on the second distance metric to be violated by a constant factor. However, when both the distance metrics satisfy the triangle inequality, we present approximation algorithms that produce placements which are near-optimal with respect to the first distance metric while violating the budget constraint only by a small constant factor. We also present polynomial algorithms for these problems when the underlying graph is a tree.
Coordinating complex problem-solving among distributed intelligent agents
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1992-01-01
A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.
Intelligent Tutoring for Diagnostic Problem Solving in Complex Dynamic Systems
1991-09-01
[Garbled scan of the report documentation page; only fragments are recoverable. Contract: N00014-87-K-0482. Named contributors include Sally Cohen, Ed Crowther, Kelly Deyoe, Suzanne Dilley, Brenda Downs, Janet Fath, Dick Henneman, Patty Jones, Merrick Kossack, and Steve Krosner.]
NASA Astrophysics Data System (ADS)
Kastens, K. A.; Krumhansl, R.
2016-12-01
The Next Generation Science Standards incorporate a stronger emphasis on having students work with data than did prior standards. This emphasis is most obvious in Practice 4: Analyzing and Interpreting Data, but also permeates performance expectations built on Practice 2 when students test models, Practice 6 when students construct explanations, and Practice 7 when students test claims with evidence. To support curriculum developers who wish to guide high school students towards more sophisticated engagement with complex data, we analyzed a well-regarded body of instructional materials designed for use in introductory college courses (http://serc.carleton.edu/integrate/teaching_materials/). Our analysis sought design patterns that can be reused for a variety of topics at the high school or college level. We found five such patterns, each of which was used in at least half of the modules analyzed. We describe each pattern, provide an example, and hypothesize a theory of action that could explain how the sequence of activities leverages known perceptual, cognitive and/or social processes to foster learning from and about data. In order from most to least frequent, the observed design patterns are as follows: In Data Puzzles, students respond to guiding questions about high-value snippets of data pre-selected and sequenced by the curriculum developer to lead to an Aha! inference. In Pooling Data to See the Big Picture, small groups analyze different instances of an analogous phenomenon (e.g. different hurricanes, or different divergent plate boundaries) and pool their insights to extract the commonalities that constitute the essence of that phenomenon. In Make a Decision or Recommendation, students combine geoscience data with other factors (such as economic or environmental justice concerns) to make a decision or recommendation about a human or societal action. In Predict-Observe-Explain, students make a prediction about what the Earth will look like under conditions
Radio interferometric gain calibration as a complex optimization problem
NASA Astrophysics Data System (ADS)
Smirnov, O. M.; Tasse, C.
2015-05-01
Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as STEFCAL and peeling can be understood as special cases of this, and place them in the context of the general formalism. Finally, we present an implementation and some applied results of COHJONES, another specialized direction-dependent calibration algorithm derived from the formalism.
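A minimal statement of the Wirtinger formalism the abstract builds on (standard definitions, summarized by us rather than quoted from the paper):

```latex
% Wirtinger derivatives of f(z), with z = x + iy, treating z and \bar{z}
% as formally independent variables:
\frac{\partial f}{\partial z}
  = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial \bar{z}}
  = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}\right)
```

For a real-valued least-squares cost \(\phi(z)\), the conjugate derivative \(\partial\phi/\partial\bar{z}\) plays the role of the gradient, which is what allows Gauss-Newton and Levenberg-Marquardt style updates to be written directly in the complex gain variables.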
Improving Analyzing Skills of Primary Students Using a Problem Solving Strategy
ERIC Educational Resources Information Center
Cabanilla-Pedro, Lily Ann; Acob-Navales, Margelyn; Josue, Fe T.
2004-01-01
This study makes use of an action research paradigm to improve primary students' analyzing skills. It was conducted at the San Esteban Elementary School, Region I, Philippines, during the 6-week off campus practice teaching of one of the researchers. Sources of data include a thinking skills checklist, a set of Curriculum Support Materials (CSM),…
ERIC Educational Resources Information Center
Karp, Alexander
2010-01-01
This article analyzes the experiences of prospective secondary mathematics teachers during a teaching methods course, offered prior to their student teaching, but involving actual teaching and reflexive analysis of this teaching. The study focuses on the pedagogical difficulties that arose during their teaching, in which prospective teachers…
Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems
NASA Technical Reports Server (NTRS)
Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael
2013-01-01
The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short given the complexity of the ATS, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework) and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Überlingen accident, since it exemplifies authority problems when conflicting advice is received from human and automated systems.
A Real Space Cellular Automaton Laboratory (ReSCAL) to analyze complex geophysical systems
NASA Astrophysics Data System (ADS)
Rozier, O.; Narteau, C.
2012-04-01
The Real Space Cellular Automaton Laboratory (ReSCAL) is a generator of 3D multiphysics, Markovian and stochastic cellular automata with continuous time. The objective of this new software, released under a GNU licence, is to develop interdisciplinary research collaboration to investigate the dynamics of complex geophysical systems. In the vast majority of cases, a numerical model is a set of physical variables (temperature, pressure, velocity, etc.) that are recalculated over time according to some predetermined rules or equations. Then, any point in space is entirely characterized by a local set of parameters. This is not the case in ReSCAL, where the only local variable is a state parameter that represents the different phases involved in the problem. An elementary cell represents a given volume of real space. Pairs of nearest-neighbour cells are called doublets. For each individual physical process that we take into account, there is a set of doublet transitions. Using this approach we can model a wide range of physical-chemical or anthropological processes. Here, we present different ingredients of ReSCAL using published applications in geosciences (Narteau et al. 2001 and 2009). We also show how ReSCAL can be developed and used across many disciplines in geophysics and physical geography. Supplementary information: source files of ReSCAL can be downloaded from http://www.ipgp.fr/~rozier/ReSCAL/rescal-en.html
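The doublet-transition idea can be sketched in a few lines. The toy below is a 1-D, two-state illustration of our own invention (ReSCAL itself is 3-D, multiphysics software); it shows the core mechanic: a transition table maps a pair of neighbouring states to a new pair, and time advances by exponentially distributed waits, Gillespie-style.

```python
import random

# Hypothetical two-state example: a "grain" hops right into "air".
TRANSITIONS = {("grain", "air"): ("air", "grain")}
RATE = 1.0  # all transitions share one rate in this toy example

def step(cells, rng=random):
    """Apply one stochastic doublet transition in place; return the
    waiting time, or None if no doublet is active (absorbing state)."""
    active = [i for i in range(len(cells) - 1)
              if (cells[i], cells[i + 1]) in TRANSITIONS]
    if not active:
        return None
    dt = rng.expovariate(RATE * len(active))  # continuous-time clock
    i = rng.choice(active)  # uniform pick is fine: all rates are equal
    cells[i], cells[i + 1] = TRANSITIONS[(cells[i], cells[i + 1])]
    return dt
```

Starting from `["grain", "air", "air"]`, repeated calls walk the grain to the right edge, after which `step` returns `None`.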
Smith, Helen J; Chen, Jing; Liu, Xiaoyun
2008-01-01
In collaborative qualitative research in Asia, data are usually collected in the national language, and this poses challenges for analysis. Translation of transcripts to a language common to the whole research team is time consuming and expensive; meaning can easily be lost in translation; and validity of the data may be compromised in this process. We draw on several published examples from public health research conducted in mainland China, to highlight how language can influence rigour in the qualitative research process; for each problem we suggest potential solutions based on the methods used in one of our research projects in China. Problems we have encountered include obtaining sufficient depth and detail in qualitative data; deciding on language for data collection; managing data collected in Mandarin; and the influence of language on interpreting meaning. We have suggested methods for overcoming problems associated with collecting, analysing, and interpreting qualitative data in a local language, that we think help maintain analytical openness in collaborative qualitative research. We developed these methods specifically in research conducted in Mandarin in mainland China; but they need further testing in other countries with data collected in other languages. Examples from other researchers are needed. PMID:18616812
A robust interrupted time series model for analyzing complex health care intervention data.
Cruz, Maricela; Bender, Miriam; Ombao, Hernando
2017-08-29
Current health policy calls for greater use of evidence-based care delivery services to improve patient quality and safety outcomes. Care delivery is complex, with interacting and interdependent components that challenge traditional statistical analytic techniques, in particular, when modeling a time series of outcomes data that might be "interrupted" by a change in a particular method of health care delivery. Interrupted time series (ITS) is a robust quasi-experimental design with the ability to infer the effectiveness of an intervention that accounts for data dependency. Current standardized methods for analyzing ITS data do not model changes in variation and correlation following the intervention. This is a key limitation since it is plausible for data variability and dependency to change because of the intervention. Moreover, present methodology either assumes a prespecified interruption time point with an instantaneous effect or removes data for which the effect of intervention is not fully realized. In this paper, we describe and develop a novel robust interrupted time series (robust-ITS) model that overcomes these omissions and limitations. The robust-ITS model formally performs inference on (1) identifying the change point; (2) differences in preintervention and postintervention correlation; (3) differences in the outcome variance preintervention and postintervention; and (4) differences in the mean preintervention and postintervention. We illustrate the proposed method by analyzing patient satisfaction data from a hospital that implemented and evaluated a new nursing care delivery model as the intervention of interest. The robust-ITS model is implemented in an R Shiny toolbox, which is freely available to the community. Copyright © 2017 John Wiley & Sons, Ltd.
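As a rough illustration of the change-point side of this idea, the sketch below scans candidate interruption times and reports pre/post means and variances. It is a crude least-squares stand-in for the paper's formal robust-ITS inference (which also models correlation and is implemented in their R Shiny toolbox); all names here are our own.

```python
import statistics

def fit_its(y, changepoint):
    """Pre/post-intervention means and variances for a given
    interruption index (segment boundary)."""
    pre, post = y[:changepoint], y[changepoint:]
    return (statistics.mean(pre), statistics.mean(post),
            statistics.pvariance(pre), statistics.pvariance(post))

def estimate_changepoint(y, min_seg=2):
    """Pick the change point minimising pooled within-segment
    sum of squared errors (no formal inference, unlike robust-ITS)."""
    def sse(seg):
        m = statistics.mean(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(min_seg, len(y) - min_seg + 1),
               key=lambda c: sse(y[:c]) + sse(y[c:]))
```

On a series that jumps from a mean of 1 to a mean of 5 at index 4, the scan recovers the interruption point and the segment summaries directly.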
[Wide QRS complex tachycardia: an old and new problem].
Oreto, Giuseppe; Luzza, Francesco; Satullo, Gaetano; Donato, Antonino; Carbone, Vincenzo; Calabrò, Maria Pia
2009-09-01
The correct diagnosis of wide QRS complex tachycardia is an old problem, but it remains a new one, since no simple approach to solving it has yet become available despite the amount of research devoted to this topic. A wide QRS tachycardia can be: 1) ventricular tachycardia; 2) supraventricular tachycardia with bundle branch block that may be either preexisting or due to aberrant conduction, namely tachycardia-dependent; a further possibility is the effect of antiarrhythmic drugs, which slow down intraventricular conduction, resulting in marked QRS complex widening; 3) supraventricular tachycardia with conduction of impulses to the ventricles over an accessory pathway (preexcited tachycardia). The origin of a wide QRS complex tachycardia can be reliably identified using a "holistic" approach, namely by taking into account all of the available items: no single criterion is able to provide a simple and quick solution to the problem in all cases. The electrocardiographic signs are, without exception, suggestive of ectopy, namely a ventricular origin of the impulses; supraventricular tachycardia with aberrant conduction may be diagnosed only by excluding all of the items favoring ectopy. The classic diagnostic criteria include: 1) atrio-ventricular dissociation, characterized by the absence of any relationship between QRS complexes and P waves; this phenomenon is at times immediately recognizable but more often can be discovered only by means of a detailed analysis of the tracing; 2) second-degree ventriculo-atrial block, characterized by a relationship between QRS complexes and P waves, but with more ventricular complexes than P waves; 3) fusion and/or capture beats; 4) a concordant precordial pattern, a sign that can also be expressed as the absence of RS (or even rs, Rs, rS) complexes in the precordial leads; 5) an interval > 100 ms from the beginning of the QRS complex to the nadir of the S wave in any precordial lead. Vagal maneuvers and analysis of previous ECGs
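The diagnosis-of-exclusion logic described above can be encoded as a simple checklist: if any ectopy-favoring sign is present, call it ventricular; otherwise fall back to aberrancy. This sketch uses field names we invented for the abstract's five classic criteria and is illustrative only, not a clinical tool.

```python
# Ectopy-favoring findings, one flag per classic criterion from the text.
ECTOPY_SIGNS = (
    "av_dissociation",             # no relationship between QRS and P
    "second_degree_va_block",      # more QRS complexes than P waves
    "fusion_or_capture_beats",
    "concordant_precordial_pattern",  # no RS complexes in precordial leads
    "onset_to_s_nadir_over_100ms",
)

def classify_wide_qrs(findings):
    """Return the favored diagnosis given a dict of boolean findings.
    SVT with aberrancy is reached only by exclusion."""
    if any(findings.get(sign) for sign in ECTOPY_SIGNS):
        return "ventricular tachycardia"
    return "SVT with aberrancy (by exclusion)"
```

A tracing showing AV dissociation alone already tips the call to ventricular tachycardia; only a tracing with none of the five signs is read as aberrancy.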
ERIC Educational Resources Information Center
Buxton, Cory A.; Salinas, Alejandra; Mahotiere, Margarette; Lee, Okhee; Secada, Walter G.
2013-01-01
Grounded in teacher professional development addressing the intersection of student diversity and content area instruction, this study examined school teachers' pedagogical reasoning complexity as they reflected on their second language learners' science problem solving abilities using both home and school contexts. Teachers responded to interview…
The problem of motivating teaching staff in a complex amalgamation.
Kenrick, M A
1993-09-01
This paper addresses some of the problems brought about by the merger of a number of schools of nursing into a new complex amalgamation. A very real concern in the new colleges of nursing and midwifery in the United Kingdom is the effect of amalgamation on management systems and staff morale. The main focus of this paper is the motivation of staff during this time of change. There is currently a lack of security amongst staff and in many instances the personal job satisfaction of nurse teachers and managers of nurse education has been reduced, which has made the task of motivating staff difficult. Hence, two major theories of motivation and the implications of these theories for managers of nurse education are discussed. The criteria used for the selection of managers within the new colleges, leadership styles and organizational structures are reviewed. The amalgamations have brought about affiliation with higher-education institutions. Some problems associated with these mergers and the effects on the motivation of staff both within the higher-education institutions and the nursing colleges are outlined. Strategies for overcoming some of the problems are proposed including job enlargement, job enrichment, potential achievement rewards and the use of individual performance reviews which may be useful for assessing the ability of all staff, including managers, in the new amalgamations.
Complexity analysis of pipeline mapping problems in distributed heterogeneous networks
Lin, Ying; Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
2009-04-01
Large-scale scientific applications require using various system resources to execute complex computing pipelines in distributed networks to support collaborative research. System resources are typically shared in the Internet or over dedicated connections based on their location, availability, capability, and capacity. Optimizing the network performance of computing pipelines in such distributed environments is critical to the success of these applications. We consider two types of large-scale distributed applications: (1) interactive applications where a single dataset is sequentially processed along a pipeline; and (2) streaming applications where a series of datasets continuously flow through a pipeline. The computing pipelines of these applications consist of a number of modules executed in a linear order in network environments with heterogeneous resources under different constraints. Our goal is to find an efficient mapping scheme that allocates the modules of a pipeline to network nodes for minimum end-to-end delay or maximum frame rate. We formulate the pipeline mappings in distributed environments as optimization problems and categorize them into six classes with different optimization goals and mapping constraints: (1) Minimum End-to-end Delay with No Node Reuse (MEDNNR), (2) Minimum End-to-end Delay with Contiguous Node Reuse (MEDCNR), (3) Minimum End-to-end Delay with Arbitrary Node Reuse (MEDANR), (4) Maximum Frame Rate with No Node Reuse or Share (MFRNNRS), (5) Maximum Frame Rate with Contiguous Node Reuse and Share (MFRCNRS), and (6) Maximum Frame Rate with Arbitrary Node Reuse and Share (MFRANRS). Here, 'contiguous node reuse' means that multiple contiguous modules along the pipeline may run on the same node and 'arbitrary node reuse' imposes no restriction on node reuse. Note that in interactive applications, a node can be reused but its resource is not shared. We prove that MEDANR is polynomially solvable and the rest are NP-complete. MEDANR, where either
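A plausible reason MEDANR is polynomially solvable is that, with arbitrary node reuse, the best delay for the first j modules ending on node v depends only on module j-1, so the problem reduces to a layered shortest path. The sketch below implements that dynamic program under our own assumptions about the inputs (processing and link delay matrices); the paper's actual algorithm may differ.

```python
# Hypothetical inputs: proc[j][v] = time of module j on node v,
# link[u][v] = transfer delay from node u to node v (0 on the same node),
# src = node where the input dataset originates.
def min_end_to_end_delay(proc, link, src):
    """Layered-shortest-path DP for the arbitrary-node-reuse case:
    O(m * n^2) for m modules on n nodes."""
    n_modules, n_nodes = len(proc), len(proc[0])
    # Best delay after placing module 0 on each node v.
    best = [link[src][v] + proc[0][v] for v in range(n_nodes)]
    for j in range(1, n_modules):
        best = [min(best[u] + link[u][v] for u in range(n_nodes)) + proc[j][v]
                for v in range(n_nodes)]
    return min(best)
```

With two modules whose fast nodes differ and unit link delay between the two nodes, the DP correctly pays one transfer to run each module on its fast node.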
NASA Astrophysics Data System (ADS)
Walker, D. N.; Fernsler, R. F.; Blackwell, D. D.; Amatucci, W. E.
2008-12-01
In earlier work, using a network analyzer, it was shown that collisionless resistance (CR) exists in the sheath of a spherical probe when driven by a small rf signal. The CR is inversely proportional to the plasma density gradient at the location where the applied angular frequency equals the plasma frequency ωpe. Recently, efforts have concentrated on a study of the low-to-intermediate frequency response of the probe to the rf signal. At sufficiently low frequencies, the CR is beyond cutoff, i.e., below the plasma frequency at the surface of the probe. Since the electron density at the probe surface decreases as a function of applied (negative) bias, the CR will extend to lower frequencies as the magnitude of negative bias increases. Therefore to eliminate both CR and ion current contributions, the frequencies presently being considered are much greater than the ion plasma frequency, ωpi, but less than the plasma frequency, ωpe(r0), where r0 is the probe radius. It is shown that, in this frequency regime, the complex impedance measurements made with a network analyzer can be used to determine electron temperature. An overview of the theory is presented along with comparisons to data sets made using three stainless steel spherical probes of different sizes in different experimental environments and different plasma parameter regimes. The temperature measurements made by this method are compared to those made by conventional Langmuir probe sweeps; the method shown here requires no curve fitting as is the usual procedure with Langmuir probes when a Maxwell-Boltzmann electron distribution is assumed. The new method requires, however, a solution of the Poisson equation to determine the approximate sheath dimensions and integrals to determine approximate plasma and sheath inductances. The solution relies on the calculation of impedance for a spherical probe immersed in a collisionless plasma and is based on a simple circuit analogy for the plasma. Finally, the
Krishnan, Kartik G; Müller, Adolf; Hong, Bujung; Potapov, Alexander A; Schackert, Gabriele; Seifert, Volker; Krauss, Joachim K
2012-03-01
Wound-healing problems in the neurosurgical patient can be particularly bothersome, owing to the various specific risk factors involved. These may range from simple wound dehiscence to complex multi-layer defects with cerebrospinal fluid (CSF) leakage and contamination. The latter is quite rare in practice and requires an individually titrated reconstruction strategy. Our objective was to retrospectively analyze the neurosurgical patients with complex, recalcitrant wound-healing problems treated in our department, to develop a grading system based on the risk factors specific to our specialty, and to adapt a surgical reconstruction algorithm. During an 11-year period, 49 patients were identified as having had complex, recalcitrant wound-healing problems involving the cranial vault (n = 43) and the skull base (n = 6) that required an adapted surgical wound-management strategy. The wound-healing problems were aftermaths of the surgical treatment of: (1) brain tumors (nine cases), (2) aneurysm clipping (ten cases), (3) trauma (27 patients), and (4) congenital malformations (three patients). Local rotational advancement flaps were performed in 18 patients and free microvascular tissue transfer was performed in 37 cases. The major risk factors leading to recalcitrant wound-healing problems in the presented group were: prolonged angiographic interventions (20%), ongoing chemotherapy or radiotherapy (47%), prolonged cortisone application (51%), CSF leak (76%) and, above all, multiple failed attempts at wound closure (94%). Stable long-term wound healing was achieved in all patients using vascularized tissue coverage. A ternary grading system was developed based on the various risk factors in the presented cohort. Accordingly, the algorithm for reconstruction in neurosurgical patients was adapted. Primary disease, treatment history, and distorted anatomical structures are major concerns in the management of complex wound-healing problems in neurosurgical patients
Understanding the determinants of problem-solving behavior in a complex environment
NASA Technical Reports Server (NTRS)
Casner, Stephen A.
1994-01-01
It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates on the probable effects of each environmental feature.
On the problem of analyzing the dynamic properties of a layered half-space
NASA Astrophysics Data System (ADS)
Belyankova, T. I.; Kalinchuk, V. V.
2014-09-01
An efficient method is developed for constructing the Green matrix functions of a layered inhomogeneous half-space. Matrix formulas convenient for programming are proposed, which make it possible to study the properties of a multilayered half-space with high accuracy. As an example of the problem of oscillations of a three-layer half-space, transformation of the dispersion characteristics of a three-layered medium is shown as a function of the relations of the mechanical and geometric parameters of its components. Study of the properties of the Green's function of a medium with a low-velocity layered inclusion showed that each mode of a surface wave exists in a limited frequency range: in addition to the critical frequency of mode occurrence, the frequency of its disappearance exists—a frequency above which the mode is suppressed because of superposition of the zero of the Green's function on its pole. A similar study conducted for a medium with a high-velocity layered inclusion has shown that in addition to the cutoff frequency (the frequency at which a surface wave propagating in the low-frequency range disappears), there is the frequency of its recurrent generation—the upper boundary of the "cutoff range" of the first mode. Beyond this range, the first mode propagates, and also the other propagating modes can appear. The critical relation of the geometric parameters of the medium determining the existence and boundaries of the cutoff range of a wave is established.
[Self-neglect in older adults--a complex problem].
Reesink, Fransje E; Boelaarts, Leo; Weinstein, Henry C
2009-01-01
A 76-year-old man presented at the emergency department with functional decline and extreme self-neglect. He died after a few days. The probable cause of death was pneumonia. His family consented to autopsy. Surprisingly, the neuropathological findings showed a tauopathy consistent with fronto-temporal dementia. Self-neglect in the elderly is a common and complex problem associated with high mortality and morbidity. This syndrome requires a thorough workup to detect possible causes. The most common etiologies are neurodegenerative disorders, psychiatric illness and alcohol abuse. It is important to elucidate the cause of self-neglect in order to give the proper treatment and support to the patient and family.
Human opinion dynamics: An inspiration to solve complex optimization problems
Kaur, Rishemjit; Kumar, Ritesh; Bhondekar, Amol P.; Kapur, Pawan
2013-01-01
Human interactions give rise to the formation of different kinds of opinions in a society. The study of the formation and dynamics of opinions has been one of the most important areas in social physics. Opinion dynamics and the associated social structure lead to decision making, or so-called opinion consensus. Opinion formation is a process of collective intelligence that evolves from the interplay between the integrative tendencies of social influence and the disintegrative effects of individualisation, and it could therefore be exploited for developing search strategies. Here, we demonstrate that human opinion dynamics can be utilised to solve complex mathematical optimization problems. The results have been compared with a standard algorithm inspired by bird-flocking behaviour, and the comparison demonstrates the efficacy of the proposed approach in general. Our investigation may open new avenues towards understanding collective decision making. PMID:24141795
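The search strategy the abstract describes can be caricatured in a few lines: candidate solutions play the role of opinions, social influence pulls them toward the current consensus (here, simply the best opinion found so far), and individualisation adds decaying random exploration. This is an illustrative sketch, not the authors' algorithm; the update rule, coefficients, and objective function are all invented for the example.

```python
import random

def sphere(x):
    """Objective to minimize: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def consensus_optimize(f, dim=2, agents=30, steps=200, seed=0):
    """Toy consensus-style search: each agent's 'opinion' (a candidate
    solution) is pulled toward the best opinion found so far (social
    influence) and perturbed by shrinking noise (individualisation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=f)
    for t in range(steps):
        noise = 0.5 * (1 - t / steps)  # individual exploration decays over time
        for i, x in enumerate(pop):
            pop[i] = [xi + 0.1 * (bi - xi) + rng.gauss(0, noise)
                      for xi, bi in zip(x, best)]
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = list(cand)  # consensus shifts only when an opinion improves
    return best, f(best)

best, val = consensus_optimize(sphere)
```

As in the abstract's comparison with a bird-flocking algorithm, the interesting question is how the balance between attraction to consensus and individual noise affects convergence.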
Beeler, Cheryl K.; Antes, Alison L.; Wang, Xiaoqian; Caughron, Jared J.; Thiel, Chase E.; Mumford, Michael D.
2010-01-01
This study examined the role of key causal analysis strategies in forecasting and ethical decision-making. Undergraduate participants took on the role of the key actor in several ethical problems and were asked to identify and analyze the causes, forecast potential outcomes, and make a decision about each problem. Time pressure and analytic mindset were manipulated while participants worked through these problems. The results indicated that forecast quality was associated with decision ethicality, and the identification of the critical causes of the problem was associated with both higher quality forecasts and higher ethicality of decisions. Neither time pressure nor analytic mindset impacted forecasts or ethicality of decisions. Theoretical and practical implications of these findings are discussed. PMID:20352056
Applied social and behavioral science to address complex health problems.
Livingood, William C; Allegrante, John P; Airhihenbuwa, Collins O; Clark, Noreen M; Windsor, Richard C; Zimmerman, Marc A; Green, Lawrence W
2011-11-01
Complex and dynamic societal factors continue to challenge the capacity of the social and behavioral sciences in preventive medicine and public health to overcome the most seemingly intractable health problems. This paper proposes a fundamental shift from a research approach that presumes to identify (from highly controlled trials) universally applicable interventions expected to be implemented "with fidelity" by practitioners, to an applied social and behavioral science approach similar to that of engineering. Such a shift would build on and complement the recent recommendations of the NIH Office of Behavioral and Social Science Research and require reformulation of the research-practice dichotomy. It would also require disciplines now engaged in preventive medicine and public health practice to develop a better understanding of systems thinking and the science of application that is sensitive to the complexity, interactivity, and unique elements of community and practice settings. Also needed is a modification of health-related education to ensure that those entering the disciplines develop instincts and capacities as applied scientists.
Sporothrix schenckii complex and sporotrichosis, an emerging health problem.
López-Romero, Everardo; Reyes-Montes, María del Rocío; Pérez-Torres, Armando; Ruiz-Baca, Estela; Villagómez-Castro, Julio C; Mora-Montes, Héctor M; Flores-Carreón, Arturo; Toriello, Conchita
2011-01-01
Sporothrix schenckii, now named the S. schenckii species complex, has largely been known as the etiological agent of sporotrichosis, which is an acute or chronic subcutaneous mycosis of humans and other mammals. Gene sequencing has revealed the following species in the S. schenckii complex: Sporothrix albicans, Sporothrix brasiliensis, Sporothrix globosa, Sporothrix luriei, Sporothrix mexicana and S. schenckii. The increasing number of reports of Sporothrix infection in immunocompromised patients, mainly the HIV-infected population, suggests sporotrichosis as an emerging global health problem concomitant with the AIDS pandemic. Molecular studies have demonstrated a high level of intraspecific variability. Components of the S. schenckii cell wall that act as adhesins and immunogenic inducers, such as a 70-kDa glycoprotein, are apparently specific to this fungus. The main glycan peptidorhamnomannan cell wall component is the only O-linked glycan structure known in S. schenckii. It contains an α-mannobiose core followed by one α-glucuronic acid unit, which may be mono- or di-rhamnosylated. The oligomeric structure of glucosamine-6-P synthase has led to a significant advance in the development of antifungals targeted to the enzyme's catalytic domain in S. schenckii.
Deep graphs—A general framework to represent and analyze heterogeneous complex systems across scales
NASA Astrophysics Data System (ADS)
Traxl, Dominik; Boers, Niklas; Kurths, Jürgen
2016-06-01
Network theory has proven to be a powerful tool in describing and analyzing systems by modelling the relations between their constituent objects. Particularly in recent years, a great progress has been made by augmenting "traditional" network theory in order to account for the multiplex nature of many networks, multiple types of connections between objects, the time-evolution of networks, networks of networks and other intricacies. However, existing network representations still lack crucial features in order to serve as a general data analysis tool. These include, most importantly, an explicit association of information with possibly heterogeneous types of objects and relations, and a conclusive representation of the properties of groups of nodes as well as the interactions between such groups on different scales. In this paper, we introduce a collection of definitions resulting in a framework that, on the one hand, entails and unifies existing network representations (e.g., network of networks and multilayer networks), and on the other hand, generalizes and extends them by incorporating the above features. To implement these features, we first specify the nodes and edges of a finite graph as sets of properties (which are permitted to be arbitrary mathematical objects). Second, the mathematical concept of partition lattices is transferred to the network theory in order to demonstrate how partitioning the node and edge set of a graph into supernodes and superedges allows us to aggregate, compute, and allocate information on and between arbitrary groups of nodes. The derived partition lattice of a graph, which we denote by deep graph, constitutes a concise, yet comprehensive representation that enables the expression and analysis of heterogeneous properties, relations, and interactions on all scales of a complex system in a self-contained manner. Furthermore, to be able to utilize existing network-based methods and models, we derive different representations of
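The supernode/superedge idea can be illustrated with plain Python dictionaries: nodes and edges carry property sets, and a partition of the node set induces aggregated properties on the groups and on the connections between groups. The property names and the summing aggregation below are illustrative choices for a sketch, not part of the deep-graph formalism itself.

```python
from collections import defaultdict

# Nodes as property dicts; 'group' defines a partition of the node set.
nodes = {
    "a": {"group": "G1", "value": 3.0},
    "b": {"group": "G1", "value": 1.0},
    "c": {"group": "G2", "value": 4.0},
}
# Edges as property dicts keyed by node pairs.
edges = {("a", "b"): {"weight": 1.0}, ("b", "c"): {"weight": 2.0}}

def partition(nodes, edges, key):
    """Aggregate nodes into supernodes by `key`, and edges into
    superedges between the corresponding supernodes."""
    supernodes = defaultdict(lambda: {"size": 0, "value": 0.0})
    for props in nodes.values():
        g = props[key]
        supernodes[g]["size"] += 1
        supernodes[g]["value"] += props["value"]
    superedges = defaultdict(float)
    for (u, v), props in edges.items():
        gu, gv = nodes[u][key], nodes[v][key]
        if gu != gv:  # intra-group edges are absorbed into the supernode
            superedges[tuple(sorted((gu, gv)))] += props["weight"]
    return dict(supernodes), dict(superedges)

sn, se = partition(nodes, edges, "group")
```

Repeating this step with coarser and coarser partitions yields the nested groupings that the partition lattice of the paper formalizes.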
NASA Technical Reports Server (NTRS)
Huerre, P.; Karamcheti, K.
1976-01-01
The theory of sound propagation is examined in a viscous, heat-conducting fluid, initially at rest and in a uniform state, and contained in a rigid, impermeable duct with isothermal walls. Topics covered include: (1) theoretical formulation of the small amplitude fluctuating motions of a viscous, heat-conducting and compressible fluid; (2) sound propagation in a two dimensional duct; and (3) perturbation study of the inplane modes.
Astill, Rebecca G; Van der Heijden, Kristiaan B; Van Ijzendoorn, Marinus H; Van Someren, Eus J W
2012-11-01
Clear associations of sleep, cognitive performance, and behavioral problems have been demonstrated in meta-analyses of studies in adults. This meta-analysis is the first to systematically summarize all relevant studies reporting on sleep, cognition, and behavioral problems in healthy school-age children (5-12 years old) and incorporates 86 studies on 35,936 children. Sleep duration shows a significant positive relation with cognitive performance (r = .08, confidence interval [CI] [.06, .10]). Subsequent analyses on cognitive subdomains indicate specific associations of sleep duration with executive functioning (r = .07, CI [.02, .13]), with performance on tasks that address multiple cognitive domains (r = .10, CI = [.05, .16]), and with school performance (r = .09, CI [.06, .12]), but not with intelligence. Quite unlike typical findings in adults, sleep duration was not associated with sustained attention and memory. Methodological issues and brain developmental immaturities are proposed to underlie the marked differences. Shorter sleep duration is associated with more behavioral problems (r = .09, CI [.07, .11]). Subsequent analyses on subdomains of behavioral problems showed that the relation holds for both internalizing (r = .09, CI [.06, .12]) and externalizing behavioral problems (r = .08, CI [.06, .11]). Ancillary moderator analyses identified practices recommended to increase sensitivity of assessments and designs in future studies. In practical terms, the findings suggest that insufficient sleep in children is associated with deficits in higher-order and complex cognitive functions and an increase in behavioral problems. This is particularly relevant given society's tendency towards sleep curtailment.
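Correlations of this kind are conventionally pooled across studies via Fisher's z transform. A minimal fixed-effect sketch, with invented study-level values rather than the meta-analysis's actual data:

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlations: Fisher-z transform each r,
    weight by n - 3 (the inverse variance of z), average, and
    back-transform. Returns the pooled r and its 95% CI."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z = sum(w * zi for w, zi in zip(ws, zs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))
    lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
    return math.tanh(z), (lo, hi)

# Hypothetical study-level correlations and sample sizes.
r, (lo, hi) = pool_correlations([0.06, 0.10, 0.09], [400, 900, 650])
```

The narrow intervals reported in the abstract (e.g., r = .08, CI [.06, .10]) reflect the very large pooled sample, just as the total weight in the denominator above shrinks the standard error.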
ERIC Educational Resources Information Center
Leppavirta, J.; Kettunen, H.; Sihvola, A.
2011-01-01
Complex multistep problem exercises are one way to enhance engineering students' learning of electromagnetics (EM). This study investigates whether exposure to complex problem exercises during an introductory EM course improves students' conceptual and procedural knowledge. The performance in complex problem exercises is compared to prior success…
[Methamphetamine - just another stimulant or a more complex problem?].
Lecomte, Tania; Massé, Marjolaine
2014-01-01
Methamphetamine (MA) has recently become very popular in the media, due in part to its increasing popularity as well as its psychotropic effects and the negative consequences of its use. Is it a stimulant like any other, or does methamphetamine use lead to specific difficulties in its users? The aim of this article is to provide a brief review of the literature, explaining some of the reasons for its popularity in Canada as well as the physical, dental, psychiatric, cognitive and legal problems associated with its use. MA's popularity: Regarding its popularity, MA has benefitted from multiple factors, namely its low cost for users and manufacturers, its quick and intense psychotropic effects (increased energy, sexual arousal, rapid thinking, sleeplessness, lack of appetite), its easy access, as well as its various methods of ingestion (nasal, oral, injection). MA abuse also results in a multitude of negative effects, both physical and mental. MA's physical effects: In terms of negative physical effects, cardiac problems, skin infections, sexually transmitted (and injection-related) diseases as well as meth mouth are described. MA's mental effects: In terms of mental consequences, two recently published Canadian studies revealing high rates of depression symptoms and of sustained psychotic symptoms in a subgroup of MA users are presented. Studies reporting various cognitive deficits in MA users are also reviewed, including reports of a high prevalence of childhood attention deficit and hyperactivity disorder diagnoses among adult MA users. Furthermore, MA abusers are documented as having been highly exposed to trauma in their lives, with many presenting with post-traumatic stress disorder criteria. This manuscript also explores the reasons behind the forensic profiles of individuals using MA, particularly the increased tendency toward violent acts, the high incarceration rates of homeless users and the high percentage of individuals diagnosed with antisocial
Campbell, James R; Xu, Junchuan; Fung, Kin Wah
2011-01-01
We analyzed 598 of the 63,952 terms employed in problem list entries from seven major healthcare institutions that could not be mapped through the UMLS to SNOMED CT when preparing the NLM UMLS-CORE problem list subset. We intended to determine whether published or post-coordinated SNOMED concepts could accurately capture the problems as stated by the clinician, and to characterize the workload for the local terminology manager. From the terms we analyzed, we estimate that 7.5% of the total represent ambiguous statements that require clarification. Of the unambiguous terms, we estimate that 38.1% could be encoded using pre-coordinated (published core) content of the SNOMED CT January 2011 release, while 60.4% required post-coordination to capture the term's meaning within the SNOMED model. Approximately 28.5% of post-coordinated content could not be fully defined and required primitive forms. This left 1.5% of unambiguous terms whose meaning could not be represented in SNOMED CT. We estimate from our study that 98.5% of clinical terms unambiguously suggested for the problem list can be equated to published concepts or modeled with SNOMED CT, but that roughly one in four SNOMED-modeled expressions fails to represent the full meaning of the term. Implications for the business model of the local terminology manager and the development of SNOMED CT are discussed.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
ERIC Educational Resources Information Center
Blackburn, J. Joey; Robinson, J. Shane; Lamm, Alexa J.
2014-01-01
The purpose of this experimental study was to determine the effects of cognitive style and problem complexity on Oklahoma State University preservice agriculture teachers' (N = 56) ability to solve problems in small gasoline engines. Time to solution was operationalized as problem solving ability. Kirton's Adaption-Innovation Inventory was…
Postoperative nausea and vomiting: A simple yet complex problem
Shaikh, Safiya Imtiaz; Nagarekha, D.; Hegade, Ganapati; Marutheesh, M.
2016-01-01
Postoperative nausea and vomiting (PONV) is one of the complex and significant problems in anesthesia practice, particularly given the growing trend toward ambulatory and day-care surgery. This review focuses on the pathophysiology, pharmacological prophylaxis, and rescue therapy for PONV. While writing this review, we searched the Medline and PubMed databases for articles published in English from 1991 to 2014, using "postoperative nausea and vomiting, PONV, nausea-vomiting, PONV prophylaxis, and rescue" as keywords. PONV is influenced by multiple factors related to the patient, the surgery, and pre-, intra-, and post-operative anesthesia. The risk of PONV can be assessed using a scoring system such as the Apfel simplified score, which is based on four independent risk predictors. PONV prophylaxis is administered to patients at medium and high risk based on this scoring system. Newer drugs such as the neurokinin-1 receptor antagonist aprepitant are used along with serotonin (5-hydroxytryptamine subtype 3) receptor antagonists, corticosteroids, anticholinergics, antihistaminics, and butyrophenones for PONV prophylaxis. Combinations of drugs from different classes, with different mechanisms of action, are administered for optimal efficacy in adults at moderate risk for PONV. A multimodal approach combining pharmacological and nonpharmacological prophylaxis with interventions that reduce baseline risk is employed in patients at high PONV risk. PMID:27746521
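As a concrete illustration of the scoring step, the Apfel simplified score counts one point for each of four predictors (female sex, non-smoking status, history of PONV or motion sickness, and expected postoperative opioid use), with commonly cited approximate risk figures rising from about 10% at zero points to about 80% at four. A sketch (the percentages are the published approximations, shown for illustration only, not clinical guidance):

```python
# Apfel simplified risk score for PONV: one point per predictor present.
# The risk percentages below are the commonly cited approximate figures;
# treat them as illustrative, not clinical guidance.
APPROX_RISK = {0: 10, 1: 21, 2: 39, 3: 61, 4: 79}  # percent

def apfel_score(female, nonsmoker, history_ponv_or_motion_sickness,
                postoperative_opioids):
    """Count the number of risk predictors that apply (0-4)."""
    return sum([female, nonsmoker,
                history_ponv_or_motion_sickness, postoperative_opioids])

score = apfel_score(female=True, nonsmoker=True,
                    history_ponv_or_motion_sickness=False,
                    postoperative_opioids=True)
risk = APPROX_RISK[score]
```

Prophylaxis decisions in the review follow directly from this stratification: medium- and high-scoring patients receive prophylaxis, the highest-risk patients a multimodal combination.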
Abu-Asab, Mones; Koithan, Mary; Shaver, Joan; Amri, Hakima
2012-01-01
Systems biology offers cutting-edge tools for the study of complementary and alternative medicine (CAM). The advent of 'omics' techniques and the resulting avalanche of scientific data have introduced unprecedented levels of complexity and data heterogeneity to biomedical research, leading to the development of novel research approaches. Statistical averaging has its limitations and is unsuitable for the analysis of heterogeneity, as it masks diversity by homogenizing otherwise heterogeneous populations. Unfortunately, most researchers are unaware of alternative methods of analysis capable of accounting for individual variability. This paper describes a systems biology solution to data complexity through the application of parsimony phylogenetic analysis. Maximum parsimony (MP) provides a data-based modeling paradigm that permits a priori stratification of the study cohort(s), better assessment of early diagnosis, prognosis, and treatment efficacy within each stratum, and a method that could be used to explore, identify and describe complex human patterning. PMID:22327551
Complex Problem Solving: What It Is and What It Is Not
Dörner, Dietrich; Funke, Joachim
2017-01-01
Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242
Wendt, Dorothea; Brand, Thomas; Kollmeier, Birger
2014-01-01
An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics. PMID:24950184
Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)
ERIC Educational Resources Information Center
Nokelainen, Petri; Silander, Tomi
2014-01-01
This commentary to the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy with an empirical data from a complex domain. There is…
An Effective Methodology for Processing and Analyzing Large, Complex Spacecraft Data Streams
ERIC Educational Resources Information Center
Teymourlouei, Haydar
2013-01-01
The emerging large datasets have made efficient data processing a much more difficult task for the traditional methodologies. Invariably, datasets continue to increase rapidly in size with time. The purpose of this research is to give an overview of some of the tools and techniques that can be utilized to manage and analyze large datasets. We…
The Influence of Prior Experience and Process Utilization in Solving Complex Problems.
ERIC Educational Resources Information Center
Sterner, Paula; Wedman, John
Using ill-structured problems and examining problem-solving processes, this study explored the nature of solving complex, multistep problems, focusing on how prior knowledge, problem-solving process utilization, and analogical problem solving are related to success. Twenty-four college students qualified to participate by…
Eye-Tracking Study of Complexity in Gas Law Problems
ERIC Educational Resources Information Center
Tang, Hui; Pienta, Norbert
2012-01-01
This study, part of a series investigating students' use of online tools to assess problem solving, uses eye-tracking hardware and software to explore the effect of problem difficulty and cognitive processes when students solve gas law word problems. Eye movements are indices of cognition; eye-tracking data typically include the location,…
Analyzing complex wake-terrain interactions and its implications on wind-farm performance.
NASA Astrophysics Data System (ADS)
Tabib, Mandar; Rasheed, Adil; Fuchs, Franz
2016-09-01
Rotating wind turbine blades generate complex wakes involving vortices (helical tip vortex, root vortex, etc.). These wakes are regions of high velocity deficit and high turbulence intensity, and they tend to degrade the performance of downstream turbines. Hence, a conservative inter-turbine distance of up to 10 turbine diameters (10D) is sometimes used in wind-farm layout (particularly over flat terrain). This ensures that wake effects will not reduce overall wind-farm performance, but it leads to a larger land footprint for establishing a wind farm. In the case of complex terrain, the nearby terrain can, within a short distance (say 10D), rise in altitude enough to influence the wake dynamics. This wake-terrain interaction can happen either (a) indirectly, through an interaction of the wake (both the near tip vortex and the far-wake large-scale vortex) with terrain-induced turbulence (especially the smaller eddies generated by small ridges within the terrain), or (b) directly, by obstructing the wake region partially or fully in its flow path. Hence, an enhanced understanding of wake development due to wake-terrain interaction will help in wind-farm design. To this end, the current study involves: (1) understanding the numerics for successful simulation of vortices; (2) understanding the fundamental vortex-terrain interaction mechanism through studies devoted to the interaction of a single vortex with different terrains; and (3) relating the influence of vortex-terrain interactions to the performance of a wind farm by studying a multi-turbine wind-farm layout under different terrains. The results on the interaction of terrain and vortex show a much faster decay of the vortex for complex terrain compared to flatter terrain. The potential reasons identified for this observation are (a) the formation of secondary vortices in the flow and their interaction with the primary vortex, and (b) enhanced vorticity diffusion due to increased terrain-induced turbulence. The implications of
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
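The core linear library least-squares step amounts to ordinary least squares against the elemental library spectra. A minimal sketch with a synthetic two-element library (all numbers invented; the real MCLLS workflow iterates this with Monte Carlo libraries regenerated at each step):

```python
import numpy as np

# Synthetic elemental library spectra (rows: channels, columns: elements).
libraries = np.array([[1.0, 0.2],
                      [0.5, 1.0],
                      [0.1, 0.8]])
true_fractions = np.array([0.7, 0.3])

# An "unknown" sample spectrum: a linear mix of the libraries plus noise.
rng = np.random.default_rng(0)
sample = libraries @ true_fractions + rng.normal(0, 0.01, size=3)

# Linear library least-squares: solve for the elemental contributions
# that best reproduce the measured spectrum.
fractions, *_ = np.linalg.lstsq(libraries, sample, rcond=None)
```

In the MCLLS approach described above, these fitted contributions would then feed back into the Monte Carlo simulation until the library compositions and the fitted values agree.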
Yuan, Naiming; Fu, Zuntao; Zhang, Huan; Piao, Lin; Xoplaki, Elena; Luterbacher, Juerg
2015-01-01
In this paper, a new method, detrended partial-cross-correlation analysis (DPCCA), is proposed. Based on detrended cross-correlation analysis (DCCA), the method is extended with a partial-correlation technique, so that it can quantify the relations between two non-stationary signals (with the influences of other signals removed) on different time scales. We illustrate the advantages of this method with two numerical tests. Test I shows the advantages of DPCCA in handling non-stationary signals, while Test II reveals the "intrinsic" relations between two considered time series with the potential influences of other, unconsidered signals removed. To further show the utility of DPCCA in natural complex systems, we provide new evidence on the winter-time Pacific Decadal Oscillation (PDO) and the winter-time Nino3 Sea Surface Temperature Anomaly (Nino3-SSTA) affecting the Summer Rainfall over the middle-lower reaches of the Yangtze River (SRYR). By applying DPCCA, stronger significant correlations between SRYR and Nino3-SSTA are found on time scales of 6 to 8 years over the period 1951 to 2012, while significant correlations between SRYR and PDO emerge on time scales of around 35 years. With these physically explainable results, we are confident that DPCCA is a useful method for addressing complex systems. PMID:25634341
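The "partial" idea at the heart of DPCCA can be illustrated in miniature. The full method applies this to detrended covariances at each time scale; as a simplified stand-in, the sketch below uses the ordinary first-order partial-correlation formula with plain Pearson correlations, and the signals are synthetic:

```python
# Simplified illustration of the "partial" step in DPCCA: remove the
# linear influence of a third signal z from the correlation of x and y.
# (The real method applies this idea to detrended covariances at each
# time scale; here we use plain Pearson correlations on toy data.)
import math
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    # First-order partial correlation: r_xy.z
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# x and y are driven by a common signal z plus independent noise, so
# their raw correlation is inflated; partialling out z shrinks it.
random.seed(1)
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]
print(round(pearson(x, y), 2))          # close to 0.5 (shared driver z)
print(round(partial_corr(x, y, z), 2))  # close to 0.0 once z is removed
```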
Mass spectrometric methods to analyze the structural organization of macromolecular complexes.
Rajabi, Khadijeh; Ashcroft, Alison E; Radford, Sheena E
2015-11-01
With the development of soft ionization techniques such as electrospray ionization (ESI), mass spectrometry (MS) has found widespread application in structural biology. The ability to transfer large biomolecular complexes intact into the gas-phase, combined with the low sample consumption and high sensitivity of MS, has made ESI-MS a method of choice for the characterization of macromolecules. This paper describes the application of MS to study large non-covalent complexes. We categorize the available techniques into two groups. First, solution-based techniques in which the biomolecules are labeled in solution and subsequently characterized by MS. Three MS-based techniques are discussed, namely hydroxyl radical footprinting, cross-linking and hydrogen/deuterium exchange (HDX) MS. In the second group, MS-based techniques to probe intact biomolecules in the gas-phase, e.g. side-chain microsolvation, HDX and ion mobility spectrometry, are discussed. Together, the approaches place MS as a powerful methodology for an ever-growing plethora of structural applications.
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno
2013-01-01
Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…
ERIC Educational Resources Information Center
Tang, Hui; Kirk, John; Pienta, Norbert J.
2014-01-01
This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…
A note on the Dirichlet problem for model complex partial differential equations
NASA Astrophysics Data System (ADS)
Ashyralyev, Allaberen; Karaca, Bahriye
2016-08-01
Complex model partial differential equations of arbitrary order are considered. The uniqueness of the Dirichlet problem is studied. It is proved that the Dirichlet problem for higher order of complex partial differential equations with one complex variable has infinitely many solutions.
On the computational complexity of the reticulate cophylogeny reconstruction problem.
Libeskind-Hadas, Ran; Charleston, Michael A
2009-01-01
The cophylogeny reconstruction problem is that of finding minimal cost explanations of differences between evolutionary histories of ecologically linked groups of biological organisms. We present a proof that shows that the general problem of reconciling evolutionary histories is NP-complete and provide a sharp boundary where this intractability begins. We also show that a related problem, that of finding Pareto optimal solutions, is NP-hard. As a byproduct of our results, we give a framework by which meta-heuristics can be applied to find good solutions to this problem.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
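The idea of size-bounded learning can be sketched with a toy backtracking solver. This is a minimal illustration under invented conventions, not the paper's algorithms: on each dead end it records the current partial assignment as a "nogood", but only if the nogood involves at most a fixed number of variables, and it prunes any branch that would revisit a recorded nogood.

```python
# Minimal sketch of size-bounded learning in backtrack search (not the
# paper's implementation): record small nogoods at dead ends and prune
# branches that contain a recorded nogood.

SIZE_BOUND = 2
nogoods = set()          # frozensets of (variable, value) pairs

def neq(u, v):
    # Binary "not equal" constraint; vacuously true until both assigned.
    return lambda a: u not in a or v not in a or a[u] != a[v]

def consistent(assignment, constraints):
    return all(check(assignment) for check in constraints)

def violates_nogood(assignment):
    items = set(assignment.items())
    return any(ng <= items for ng in nogoods)

def backtrack(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if (consistent(assignment, constraints)
                and not violates_nogood(assignment)):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    # Dead end: record the current partial assignment if it is small.
    if 0 < len(assignment) <= SIZE_BOUND:
        nogoods.add(frozenset(assignment.items()))
    return None

# Toy CSP: 2-coloring a triangle (unsatisfiable), so dead ends occur.
constraints = [neq("A", "B"), neq("B", "C"), neq("A", "C")]
result = backtrack(list("ABC"), {v: [0, 1] for v in "ABC"}, constraints)
print(result)                              # None: a triangle is not 2-colorable
print(sorted(len(ng) for ng in nogoods))   # only nogoods of size <= 2 recorded
```

Relevance-bounded learning, the scheme the abstract favors, would instead keep arbitrarily large nogoods and delete those no longer relevant to the current search region.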
Technologically Mediated Complex Problem-Solving on a Statistics Task
ERIC Educational Resources Information Center
Scanlon, Eileen; Blake, Canan; Joiner, Richard; O'Shea, Tim
2005-01-01
Simulations on computers can allow many experiments to be conducted quickly to help students develop an understanding of statistical topics. We used a simulation of a challenging problem in statistics as the focus of an exploration of situations where members of a problem-solving group are physically separated then reconnected via combinations of…
THE ROLE OF PROBLEM SOLVING IN COMPLEX INTRAVERBAL REPERTOIRES
Sautter, Rachael A; LeBlanc, Linda A; Jay, Allison A; Goldsmith, Tina R; Carr, James E
2011-01-01
We examined whether typically developing preschoolers could learn to use a problem-solving strategy that involved self-prompting with intraverbal chains to provide multiple responses to intraverbal categorization questions. Teaching the children to use the problem-solving strategy did not produce significant increases in target responses until problem solving was modeled and prompted. Following the model and prompts, all participants showed immediate significant increases in intraverbal categorization, and all prompts were quickly eliminated. Use of audible self-prompts was evident initially for all participants, but declined over time for 3 of the 4 children. Within-session response patterns remained consistent with use of the problem-solving strategy even when self-prompts were not audible. These findings suggest that teaching and prompting a problem-solving strategy can be an effective way to produce intraverbal categorization responses. PMID:21709781
West, Stephen G
2016-01-01
Should low-achieving students be promoted to the next grade or be retained (held back) in the prior grade? This special section presents a discussion of the application of marginal structural models to the challenging problem of estimating the effect of promotion versus retention in grade on math scores in elementary school. Vandecandelaere, De Fraine, Van Damme, and Vansteelandt provide a didactic presentation of the marginal structural modeling approach, noting retention is a time-varying treatment because promoted low-achieving students may be retained in a subsequent grade. Steiner, Park, and Kim's commentary presents a detailed analysis of the treatment effects being estimated in same-age versus same-grade comparisons from the perspective of the potential outcomes model. Reshetnyak, Cham, and Kim's commentary clarifies the conditions under which same-age versus same-grade comparisons might be preferred; they also identify methods of further improving the estimation of retention effects. In their rejoinder, Vandecandelaere and Vansteelandt discuss tradeoffs in comparing the promoted and retained groups and highlight sensitivity analysis as a method of probing the robustness of treatment effect estimates. Our hope is that this combined didactic presentation and critical evaluation will encourage researchers to add marginal structural models to their methodological toolkits.
A Complex Network Model for Analyzing Railway Accidents Based on the Maximal Information Coefficient
NASA Astrophysics Data System (ADS)
Shao, Fu-Bo; Li, Ke-Ping
2016-10-01
It is an important issue to identify important influencing factors in railway accident analysis. In this paper, employing the maximal information coefficient (MIC), a good measure of dependence for two-variable relationships that can capture a wide range of associations, a complex network model for railway accident analysis is designed in which nodes denote factors of railway accidents and edges are generated between two factors whose MIC value is greater than or equal to a dependence criterion. The variation of the network structure is studied: as the dependence criterion increases, the network approaches a scale-free structure. Moreover, employing the proposed network, important influencing factors are identified. We find that the annual track density-gross tonnage factor is an important factor: it is a cut vertex when the dependence criterion equals 0.3. From the network, it is found that railway development is unbalanced across different states, which is consistent with observation. Supported by the Fundamental Research Funds for the Central Universities under Grant No. 2016YJS087, the National Natural Science Foundation of China under Grant No. U1434209, and the Research Foundation of State Key Laboratory of Railway Traffic Control and Safety, Beijing Jiaotong University under Grant No. RCS2016ZJ001
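The construction can be sketched generically. Note the assumptions: the factor names and pairwise scores below are hypothetical, and a plain |Pearson correlation|-style score stands in for MIC (which needs a dedicated estimator); the cut-vertex search is the standard articulation-point notion via component counting.

```python
# Sketch of the network construction: factors become nodes, an edge is
# added when a pairwise dependence score meets the criterion, and cut
# vertices (nodes whose removal disconnects the network) are reported.
# Factor names and scores are invented; MIC is replaced by a generic
# precomputed dependence score.

def build_edges(scores, criterion):
    """scores: {(u, v): dependence value}; keep pairs >= criterion."""
    return [pair for pair, s in scores.items() if s >= criterion]

def cut_vertices(nodes, edges):
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def component_count(skip=None):
        # Depth-first search over all nodes, optionally ignoring one.
        seen, count = set(), 0
        for start in nodes:
            if start == skip or start in seen:
                continue
            count += 1
            stack = [start]
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                stack.extend(m for m in adj[n] if m != skip and m not in seen)
        return count

    base = component_count()
    return [n for n in nodes if component_count(skip=n) > base]

# Hypothetical pairwise dependence scores between four accident factors.
scores = {("track", "tonnage"): 0.9, ("tonnage", "weather"): 0.4,
          ("weather", "signals"): 0.35, ("track", "signals"): 0.1}
nodes = ["track", "tonnage", "weather", "signals"]
edges = build_edges(scores, criterion=0.3)
print(len(edges))                          # 3 edges survive the 0.3 criterion
print(sorted(cut_vertices(nodes, edges)))  # interior nodes of the chain
```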
Nonprotein Based Enrichment Method to Analyze Peptide Cross-Linking in Protein Complexes
Yan, Funing; Che, Fa-Yun; Rykunov, Dmitry; Nieves, Edward; Fiser, Andras; Weiss, Louis M.; Angeletti, Ruth Hogue
2009-01-01
Cross-linking analysis of protein complexes and structures by tandem mass spectrometry (MS/MS) has advantages in speed, sensitivity, specificity, and the capability of handling complicated protein assemblies. However, detection and accurate assignment of the cross-linked peptides are often challenging due to their low abundance and complicated fragmentation behavior in collision-induced dissociation (CID). To simplify the MS analysis and improve the signal-to-noise ratio of the cross-linked peptides, we developed a novel peptide enrichment strategy that utilizes a cross-linker with a cryptic thiol group and beads modified with a photocleavable cross-linker. The functional cross-linkers were designed to react with the primary amino groups in proteins. Human serum albumin was used as a model protein to detect intra- and intermolecular cross-linkages. Use of this protein-free selective retrieval method eliminates the contamination that can result from avidin–biotin based retrieval systems and simplifies data analysis. These features may make the method suitable to investigate protein–protein interactions in biological samples. PMID:19642656
Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries
NASA Astrophysics Data System (ADS)
Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.
2014-09-01
Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete radiation transfer models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis on the receiver.
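The ray-tracing approach the abstract describes reduces, at its core, to intersecting rays with receiver surfaces and binning the hits into an irradiance profile. The toy below illustrates only that core under invented geometry and ray statistics; real tools like SolTrace add sunshape models, optical error cones, and reflective heliostat surfaces.

```python
# Toy version of the ray-tracing idea: fire rays at a flat-plate
# receiver in the z = 0 plane and bin the hit points into a hit-count
# ("irradiance") profile along x.  All numbers are invented.
import random

def trace_to_plate(origin, direction):
    """Return the (x, y) hit point on the z = 0 plate, or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz >= 0:          # ray travels away from the plate
        return None
    t = -oz / dz         # parameter where z(t) = oz + t*dz = 0
    return (ox + t * dx, oy + t * dy)

def irradiance_profile(hits, x_min, x_max, bins):
    counts = [0] * bins
    width = (x_max - x_min) / bins
    for x, _ in hits:
        if x_min <= x < x_max:
            counts[int((x - x_min) / width)] += 1
    return counts

random.seed(0)
hits = []
for _ in range(10000):
    # Rays start above the plate with a small random angular spread,
    # a crude stand-in for sunshape plus optical errors.
    direction = (random.gauss(0, 0.05), random.gauss(0, 0.05), -1.0)
    h = trace_to_plate((0.0, 0.0, 10.0), direction)
    if h is not None:
        hits.append(h)
profile = irradiance_profile(hits, -2.0, 2.0, 8)
print(profile.index(max(profile)) in (3, 4))  # peak lands near the center
```

A profile like this, computed per surface element, is what would be handed to the CFD code as a boundary heat-flux condition.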
Operational Reconnaissance: Identifying the Right Problems in a Complex World
2015-05-23
individual actors following simple and uncoordinated strategies can produce aggregate behavior that is complex and ordered, although not necessarily predictable...the larger offensive under the 3rd Byelorussian Army Group. The operational commander in this scenario was Colonel General Ivan Kanilovich
Analyzing complex FTMS simulations: a case study in high-level visualization of ion motions.
Burakiewicz, Wojciech; van Liere, Robert
2006-01-01
Current practice in particle visualization renders particle position data directly onto the screen as points or glyphs. Using a camera placed at a fixed position, particle motions can be visualized by rendering trajectories or by animations. Applying such direct techniques to large, time-dependent particle data sets often results in cluttered images in which the dynamic properties of the underlying system are difficult to interpret. In this case study we take an alternative approach to the visualization of ion motions. Instead of rendering ion position data directly, we first extract meaningful motion information from the ion position data and then map this information onto geometric primitives. Our goal is to produce high-level visualizations that reflect the physicists' way of thinking about ion dynamics. Parameterized geometric icons are defined to encode motion information of clusters of related ions. In addition, a parameterized camera control mechanism is used to analyze relative instead of only absolute ion motions. We apply the techniques to simulations of Fourier transform mass spectrometry (FTMS) experiments. The data produced by such simulations can amount to 5×10^4 ions and 10^5 timesteps. This paper discusses the requirements, design and informal evaluation of the implemented system.
NASA Astrophysics Data System (ADS)
Zhang, Yuchen; Kumari, Sudesh; Schmidt, Michael W.; Gordon, Mark S.; Yang, Dong-Sheng
2017-06-01
Ce(C2H2) and Ce(C4H6) are produced by Ce-mediated ethylene activation and investigated by mass-analyzed threshold ionization (MATI) spectroscopy, isotopic substitutions, and relativistic quantum chemical computations. The MATI spectrum of Ce(C2H2) exhibits two nearly identical band systems separated by 128 cm^-1, and that of Ce(C4H6) shows three similar band systems separated by 55 and 105 cm^-1. These separations are not affected by deuteration. The observed band systems for the two Ce-hydrocarbon species are attributed to the spin-orbit splitting arising from interactions of triplet and singlet states. Ce(C2H2) is a metallacyclopropene in C2v symmetry, and Ce(C4H6) is a metallacyclopentene in Cs symmetry. The low-energy valence electron configurations of the neutral and ionic states of each species are Ce 4f^1 6s^1 and Ce 4f^1, respectively. The remaining two electrons that are associated with the isolated Ce atom or ion are spin paired in a molecular orbital that is a bonding combination between a Ce 5d orbital and a hydrocarbon π* antibonding orbital.
Problems in processing multizonal video information at specialized complexes
NASA Technical Reports Server (NTRS)
Shamis, V. A.
1979-01-01
Architectural requirements of a minicomputer-based specialized complex for automated digital analysis of multizonal video data are examined. The logical structure of multizonal video data and the mathematical software required for the analysis of such data are described. The composition of the specialized complex, its operating system, and the required set of peripheral devices are discussed. It is noted that although much of the analysis can be automated, the operator-computer dialog mode is essential for certain stages of the analysis.
The Fallacy of Univariate Solutions to Complex Systems Problems
Lessov-Schlaggar, Christina N.; Rubin, Joshua B.; Schlaggar, Bradley L.
2016-01-01
Complex biological systems, by definition, are composed of multiple components that interact non-linearly. The human brain constitutes, arguably, the most complex biological system known. Yet most investigation of the brain and its function is carried out using assumptions appropriate for simple systems—univariate design and linear statistical approaches. This heuristic must change before we can hope to discover and test interventions to improve the lives of individuals with complex disorders of brain development and function. Indeed, a movement away from simplistic models of biological systems will benefit essentially all domains of biology and medicine. The present brief essay lays the foundation for this argument. PMID:27375425
ADHD treatments, sleep, and sleep problems: complex associations.
Stein, Mark A; Weiss, Margaret; Hlavaty, Laura
2012-07-01
ADHD, sleep, and ADHD treatments are highly interrelated. In this review, we describe the effects of stimulant and nonstimulant medications on sleep in children, adolescents, and adults with ADHD. Clinical predictors of sleep problems during pharmacotherapy include age, sleep problems prior to initiating treatment, and dose and dosing schedule. As yet, we have little understanding of the biological or genetic factors related to individual variation in drug response and sleep.
Asbestos quantification in track ballast, a complex analytical problem
NASA Astrophysics Data System (ADS)
Cavallo, Alessandro
2016-04-01
Track ballast forms the trackbed upon which railroad ties are laid. It is used to bear the load of the railroad ties, to facilitate water drainage, and to keep down vegetation. It is typically made of angular crushed stone with a grain size between 30 and 60 mm and good mechanical properties (high compressive strength, freeze-thaw resistance, resistance to fragmentation). The most common rock types are basalts, porphyries, orthogneisses, some carbonatic rocks and "green stones" (serpentinites, prasinites, amphibolites, metagabbros). "Green stones" in particular may contain traces, and sometimes appreciable amounts, of asbestiform minerals (chrysotile and/or fibrous amphiboles, generally tremolite-actinolite). In Italy, the chrysotile asbestos mine in Balangero (Turin) produced over 5 Mt of railroad ballast (crushed serpentinites), which was used for the railways in northern and central Italy from 1930 up to 1990. In addition to Balangero, several other serpentinite and prasinite quarries (e.g., in Emilia Romagna) provided railway ballast up to the year 2000. The legal threshold for asbestos content in track ballast is established at 1000 ppm: if the value is below this threshold, the material can be reused; otherwise it must be disposed of as hazardous waste, at very high cost. The quantitative determination of asbestos in rocks is a very complex analytical issue: although techniques like TEM-SAED and micro-Raman are very effective in the identification of asbestos minerals, a quantitative determination on bulk materials is almost impossible, or extremely expensive and time consuming. Another problem is the discrimination of asbestiform minerals (e.g. chrysotile, asbestiform amphiboles) from the common acicular to pseudo-fibrous varieties (lamellar serpentine minerals, prismatic/acicular amphiboles). In this work, more than 200 samples from the main Italian rail yards were characterized by a combined use of XRD and a special SEM…
Complexity and algorithms for copy-number evolution problems.
El-Kebir, Mohammed; Raphael, Benjamin J; Shamir, Ron; Sharan, Roded; Zaccaria, Simone; Zehavi, Meirav; Zeira, Ron
2017-01-01
Cancer is an evolutionary process characterized by the accumulation of somatic mutations in a population of cells that form a tumor. One frequent type of mutations is copy number aberrations, which alter the number of copies of genomic regions. The number of copies of each position along a chromosome constitutes the chromosome's copy-number profile. Understanding how such profiles evolve in cancer can assist in both diagnosis and prognosis. We model the evolution of a tumor by segmental deletions and amplifications, and gauge the distance from one profile to another by the minimum number of events needed to transform the first into the second. Given two profiles, our first problem aims to find a parental profile that minimizes the sum of distances to its children. Given k profiles, the second, more general problem seeks a phylogenetic tree whose k leaves are labeled by the k given profiles and whose internal vertices are labeled by ancestral profiles such that the sum of edge distances is minimum. For the former problem we give a pseudo-polynomial dynamic programming algorithm that is linear in the profile length, and an integer linear program formulation. For the latter problem we show it is NP-hard and give an integer linear program formulation that scales to practical problem instance sizes. We assess the efficiency and quality of our algorithms on simulated instances. https://github.com/raphael-group/CNT-ILP.
Generalized Householder transformations for the complex symmetric eigenvalue problem
NASA Astrophysics Data System (ADS)
Noble, J. H.; Lubasch, M.; Jentschura, U. D.
2013-08-01
We present an intuitive and scalable algorithm for the diagonalization of complex symmetric matrices, which arise from the projection of pseudo-Hermitian and complex scaled Hamiltonians onto a suitable basis set of "trial" states. The algorithm diagonalizes complex and symmetric (non-Hermitian) matrices and is easily implemented in modern computer languages. It is based on generalized Householder transformations and relies on iterative similarity transformations T → T' = Q^T T Q, where Q is a complex and orthogonal, but not unitary, matrix, i.e. Q^T = Q^(-1) but Q^† ≠ Q^(-1). We present numerical reference data to support the scalability of the algorithm. We construct the generalized Householder transformations from the notion that the conserved scalar product of eigenstates Ψ_n and Ψ_m of a pseudo-Hermitian quantum mechanical Hamiltonian can be reformulated in terms of the generalized indefinite inner product ∫ dx Ψ_n(x, t) Ψ_m(x, t), where the integrand is locally defined and complex conjugation is avoided. A few example calculations are described which illustrate the physical origin of the ideas used in the construction of the algorithm.
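A single generalized Householder step can be sketched directly from this description. The key is to use the bilinear product v^T v (no complex conjugation), so Q = I − 2vv^T/(v^T v) satisfies Q^T = Q^(-1) while not being unitary. This is a minimal sketch, not the paper's algorithm: it only annihilates the entries of one vector below the first, and it would break down for "quasi-null" vectors with v^T v = 0, a case real implementations must detect.

```python
# One generalized Householder step for the complex symmetric setting:
# Q = I - 2 v v^T / (v^T v), built with the *bilinear* product (no
# conjugation), so Q is complex orthogonal (Q^T = Q^-1) but not unitary.
import cmath

def bilinear(u, v):
    # Indefinite inner product without conjugation: sum_i u_i v_i.
    return sum(a * b for a, b in zip(u, v))

def householder_vector(x):
    """Build v so that Q x is proportional to the first basis vector e1."""
    alpha = cmath.sqrt(bilinear(x, x))  # bilinear "norm"; may be complex
    v = list(x)
    v[0] += alpha                       # reflect x onto the e1 axis
    return v

def apply_q(v, x):
    """Compute Q x = x - 2 (v^T x / v^T v) v."""
    factor = 2 * bilinear(v, x) / bilinear(v, v)
    return [xi - factor * vi for xi, vi in zip(x, v)]

x = [1 + 2j, 3 - 1j, 0.5j]
v = householder_vector(x)
y = apply_q(v, x)
# All entries below the first are annihilated, as in an ordinary
# Householder reflection, but with respect to the bilinear form.
print([abs(c) < 1e-12 for c in y[1:]])
```

Chaining such steps as similarity transformations T → Q^T T Q reduces a complex symmetric matrix toward tridiagonal form, which is the structure the paper's diagonalization builds on.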
Problem analysis of geotechnical well drilling in complex environment
NASA Astrophysics Data System (ADS)
Kasenov, A. K.; Biletskiy, M. T.; Ratov, B. T.; Korotchenko, T. V.
2015-02-01
The article examines the primary causes of problems occurring during the drilling of geotechnical wells (injection, production and monitoring wells) for in-situ leaching to extract uranium in South Kazakhstan. Hole caving, a drilling problem caused by various chemical and physical factors (hydraulic, mechanical, etc.), has been thoroughly investigated. The analysis of packing causes has revealed that this problem usually occurs because of an insufficient amount of drilling mud, associated with a small-cross-section downward flow and a relatively large-cross-section upward flow. This is explained by the fact that when spear bores are used to drill clay rocks, cuttings are usually rather large and clay particles risk coagulating.
Problems and Prospects of the Medical University Complex
ERIC Educational Resources Information Center
Sidorov, P.; Viaz'min, A.
2006-01-01
In the past ten to fifteen years the problem of providing enough cadres for health care has become substantially worse. According to these authors, the primary reason that this has happened is that the system of mandatory job assignment of graduates of educational institutions has been abolished. The entire system of organizing cadre training, in…
Complexity of the Generalized Mover’s Problem.
1985-01-01
problem by workers in the robotics field and in artificial intelligence (for example [Nilson, 69], [Paul, 72], [Udupa, 77], [Widdoes, 74], [Lozano-Perez...Nilsson, "A mobile automaton: An application of artificial intelligence techniques," Proceedings IJCAI-69, 509-520, 1969. ... C. O'Dunlaing, M
Computational neural networks driving complex analytical problem solving.
Hanrahan, Grady
2010-06-01
Neural network computing demonstrates advanced analytical problem-solving abilities to meet the demands of modern chemical research. (To listen to a podcast about this article, please go to the Analytical Chemistry multimedia page at pubs.acs.org/page/ancham/audio/index.html.)
Navigating complex decision spaces: Problems and paradigms in sequential choice.
Walsh, Matthew M; Anderson, John R
2014-03-01
To behave adaptively, we must learn from the consequences of our actions. Doing so is difficult when the consequences of an action follow a delay. This introduces the problem of temporal credit assignment. When feedback follows a sequence of decisions, how should the individual assign credit to the intermediate actions that comprise the sequence? Research in reinforcement learning provides 2 general solutions to this problem: model-free reinforcement learning and model-based reinforcement learning. In this review, we examine connections between stimulus-response and cognitive learning theories, habitual and goal-directed control, and model-free and model-based reinforcement learning. We then consider a range of problems related to temporal credit assignment. These include second-order conditioning and secondary reinforcers, latent learning and detour behavior, partially observable Markov decision processes, actions with distributed outcomes, and hierarchical learning. We ask whether humans and animals, when faced with these problems, behave in a manner consistent with reinforcement learning techniques. Throughout, we seek to identify neural substrates of model-free and model-based reinforcement learning. The former class of techniques is understood in terms of the neurotransmitter dopamine and its effects in the basal ganglia. The latter is understood in terms of a distributed network of regions including the prefrontal cortex, medial temporal lobes, cerebellum, and basal ganglia. Not only do reinforcement learning techniques have a natural interpretation in terms of human and animal behavior but they also provide a useful framework for understanding neural reward valuation and action selection.
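The temporal credit assignment problem, and the model-free solution to it, can be made concrete with a standard textbook example (not drawn from the review itself): tabular Q-learning on a small chain task where only the final transition is rewarded, so the temporal-difference updates must propagate credit backward to the earlier actions.

```python
# Generic illustration of model-free temporal credit assignment:
# tabular Q-learning on a 4-state chain where only reaching the last
# state pays reward 1.  Over episodes, TD updates propagate credit
# backward, so "right" beats "left" even in the earliest states.
import random

N_STATES = 4                 # states 0..3; reward on reaching state 3
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        nxt, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * max(Q[(nxt, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])   # temporal-difference update
        s = nxt

# Credit for the delayed reward has reached the early states.
print(all(Q[(s, "right")] > Q[(s, "left")] for s in range(N_STATES - 1)))
```

A model-based learner would instead learn the transition structure of the chain and plan through it, the distinction the review maps onto dopaminergic versus prefrontal/hippocampal substrates.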
Problem-Solving Practices and Complexity in School Psychology
ERIC Educational Resources Information Center
Brady, John; Espinosa, William R.
2017-01-01
How do experienced school psychologists solve problems in their practice? What can trainers of school psychologists learn about how to structure training and mentoring of graduate students from what actually happens in schools, and how can this inform our teaching at the university? This qualitative multi-interview study explored the processes…
Exploiting Explicit and Implicit Structure in Complex Optimization Problems
2014-09-24
Applications 161(1) (2014) 179-198. [4] J.P. Luna, C. Sagastizábal and M. Solodov, A class of Dantzig-Wolfe type decomposition methods for variational inequality problems, Mathematical Programming 143 (2014) 177-209. [5] J.P. Luna, C. Sagastizábal and M. Solodov, Complementarity and...
Andajani-Sutjahjo, Sari; Manderson, Lenore; Astbury, Jill
2007-03-01
In this article, we explore how Javanese women identify and speak of symptoms of depression in late pregnancy and early postpartum and describe their subjective accounts of mood disorders. The study, conducted in the East Java region of Indonesia in 2000, involved in-depth interviews with a subgroup of women (N = 41) who scored above the cutoff score of 12/13 on the Edinburgh Postnatal Depression Scale (EPDS) during pregnancy, at six weeks postpartum, or on both occasions. This sample was taken from a larger cohort study (N cohort = 488) researching the sociocultural factors that contribute to women's emotional well-being in early motherhood. The women used a variety of Indonesian and Javanese terms to explain their emotional states during pregnancy and in early postpartum, some of which coincided with the feelings described on the EPDS and others of which did not. Women attributed their mood variations to multiple causes including: premarital pregnancy, chronic illness in the family, marital problems, lack of support from partners or family networks, their husband's unemployment, and insufficient family income due to giving up their own paid work. We argue for the importance of understanding the context of childbearing in order to interpret the meaning of depression within complex social, cultural, and economic contexts.
Games that Enlist Collective Intelligence to Solve Complex Scientific Problems
Burnett, Stephen; Furlong, Michelle; Melvin, Paul Guy; Singiser, Richard
2016-01-01
There is great value in employing the collective problem-solving power of large groups of people. Technological advances have allowed computer games to be utilized by a diverse population to solve problems. Science games are becoming more popular and cover various areas such as sequence alignments, DNA base-pairing, and protein and RNA folding. While these tools have been developed for the general population, they can also be used effectively in the classroom to teach students about various topics. Many games also employ a social component that entices students to continue playing and thereby to continue learning. The basic functions of game play and the potential of game play as a tool in the classroom are discussed in this article. PMID:27047610
On the Complexity of the Asymmetric VPN Problem
NASA Astrophysics Data System (ADS)
Rothvoß, Thomas; Sanità, Laura
We give the first constant factor approximation algorithm for the asymmetric Virtual Private Network (VPN) problem with arbitrary concave costs. We show the stronger result that there is always a tree solution of cost at most 2·OPT, and that a tree solution of (expected) cost at most 49.84·OPT can be determined in polynomial time.
Navigating complex decision spaces: Problems and paradigms in sequential choice
Walsh, Matthew M.; Anderson, John R.
2015-01-01
To behave adaptively, we must learn from the consequences of our actions. Doing so is difficult when the consequences of an action follow a delay. This introduces the problem of temporal credit assignment. When feedback follows a sequence of decisions, how should the individual assign credit to the intermediate actions that comprise the sequence? Research in reinforcement learning provides two general solutions to this problem: model-free reinforcement learning and model-based reinforcement learning. In this review, we examine connections between stimulus-response and cognitive learning theories, habitual and goal-directed control, and model-free and model-based reinforcement learning. We then consider a range of problems related to temporal credit assignment. These include second-order conditioning and secondary reinforcers, latent learning and detour behavior, partially observable Markov decision processes, actions with distributed outcomes, and hierarchical learning. We ask whether humans and animals, when faced with these problems, behave in a manner consistent with reinforcement learning techniques. Throughout, we seek to identify neural substrates of model-free and model-based reinforcement learning. The former class of techniques is understood in terms of the neurotransmitter dopamine and its effects in the basal ganglia. The latter is understood in terms of a distributed network of regions including the prefrontal cortex, medial temporal lobes, cerebellum, and basal ganglia. Not only do reinforcement learning techniques have a natural interpretation in terms of human and animal behavior, but they also provide a useful framework for understanding neural reward valuation and action selection. PMID:23834192
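The model-free half of the dichotomy above can be made concrete with a small sketch of temporal-difference credit assignment. The chain task, parameter values, and function names below are our own illustration, not anything from the review: reward arrives only at the end of a sequence of decisions, and the TD update propagates credit backward to the intermediate states over repeated episodes.

```python
# Minimal model-free (Q-learning-style) sketch of temporal credit assignment
# on a toy 4-state chain s0 -> s1 -> s2 -> s3, with reward 1 only on
# reaching the terminal state s3. All names and numbers are illustrative.

N_STATES = 4          # s3 is terminal
ALPHA, GAMMA = 0.5, 0.9

def run_q_learning(episodes=50):
    q = [0.0] * N_STATES          # value of the single "advance" action per state
    for _ in range(episodes):
        s = 0
        while s < N_STATES - 1:
            s_next = s + 1
            r = 1.0 if s_next == N_STATES - 1 else 0.0
            # TD target bootstraps on the next state's value (0 at the terminal)
            target = r + GAMMA * (q[s_next] if s_next < N_STATES - 1 else 0.0)
            q[s] += ALPHA * (target - q[s])   # credit flows backward over episodes
            s = s_next
    return q

q = run_q_learning()
```

After training, earlier states carry discounted credit for the delayed reward (q decreases by roughly a factor of GAMMA per step away from the goal), which is exactly the credit-assignment behavior the abstract describes for model-free methods.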
Zoonoses, One Health and complexity: wicked problems and constructive conflict
2017-01-01
Infectious zoonoses emerge from complex interactions among social and ecological systems. Understanding this complexity requires the accommodation of multiple, often conflicting, perspectives and narratives, rooted in different value systems and temporal–spatial scales. Therefore, to be adaptive, successful and sustainable, One Health approaches necessarily entail conflicts among observers, practitioners and scholars. Nevertheless, these integrative approaches have, both implicitly and explicitly, tended to marginalize some perspectives and prioritize others, resulting in a kind of technocratic tyranny. An important function of One Health approaches should be to facilitate and manage those conflicts, rather than to impose solutions. This article is part of the themed issue ‘One Health for a changing world: zoonoses, ecosystems and human well-being’. PMID:28584179
Problem-oriented stereo vision quality evaluation complex
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Gusamutdinova, N.; Konovalenko, I.; Ershov, E.
2015-12-01
We describe an original low-cost hardware setup for efficient testing of stereo vision algorithms. The method combines a special hardware setup with a mathematical model; it is easy to construct and precise in the applications of interest. For a known scene we derive its analytical representation, called the virtual scene. Using a four-point correspondence between the scene and the virtual one, we compute the extrinsic camera parameters and project the virtual scene onto the image plane, which gives the ground truth for the depth map. Another result presented in this paper is a new depth-map quality metric. Its main purpose is to tune stereo algorithms for a particular problem, e.g. obstacle avoidance.
Assessing Complex Problem-Solving Skills and Knowledge Assembly Using Web-Based Hypermedia Design.
ERIC Educational Resources Information Center
Dabbagh, Nada
This research project studied the effects of hierarchical versus heterarchical hypermedia structures of Web-based case representations on complex problem-solving skills and knowledge assembly in problem-centered learning environments in order to develop a system or model that informs the design of Web-based cases for ill-structured problems across…
On the problem of constructing a modern, economic radiotelescope complex
NASA Technical Reports Server (NTRS)
Bogomolov, A. F.; Sokolov, A. G.; Poperechenko, B. A.; Polyak, V. S.
1977-01-01
Criteria for comparing and planning the technical and economic characteristics of large parabolic reflector antenna systems and other types used in radioastronomy and deep space communications are discussed. The experience gained in making and optimizing a series of highly efficient parabolic antennas in the USSR is reviewed. Several ways are indicated for further improving the complex characteristics of antennas similar to the original TNA-1500 64m radio telescope. The suggestions can be applied in planning the characteristics of radiotelescopes which are now being built, in particular, the TNA-8000 with a diameter of 128 m.
Gonçalves-Araujo, Rafael; Wiegmann, Sonja; Torrecilla, Elena; Bardaji, Raul; Röttgers, Rüdiger; Bracher, Astrid; Piera, Jaume
2017-01-01
The detection and prediction of changes in coastal ecosystems require a better understanding of the complex physical, chemical and biological interactions, which requires that observations be performed continuously. For this reason, there is an increasing demand for small, simple and cost-effective in situ sensors to analyze complex coastal waters at a broad range of scales. In this context, this study seeks to explore the potential of beam attenuation spectra, c(λ), measured in situ with an advanced-technology optical transmissometer, for assessing temporal and spatial patterns in the complex estuarine waters of Alfacs Bay (NW Mediterranean) as a test site. In particular, the information contained in the spectral beam attenuation coefficient was assessed and linked with different biogeochemical variables. The attenuation at λ = 710 nm was used as a proxy for particle concentration, TSM, whereas a novel parameter was adopted as an optical indicator for chlorophyll a (Chl-a) concentration, based on the local maximum of c(λ) observed at the long-wavelength side of the red band Chl-a absorption peak. In addition, since coloured dissolved organic matter (CDOM) has an important influence on the beam attenuation spectral shape and complementary measurements of particle size distribution were available, the beam attenuation spectral slope was used to analyze the CDOM content. Results were successfully compared with optical and biogeochemical variables from laboratory analysis of collocated water samples, and statistically significant correlations were found between the attenuation proxies and the biogeochemical variables TSM, Chl-a and CDOM. This outcome demonstrated the potential of high-frequency beam attenuation measurements as a simple, continuous and cost-effective approach for rapid detection of changes and patterns in biogeochemical properties in complex coastal environments. PMID:28107539
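Two of the proxies described above, particle load from c(710 nm) and a spectral slope, amount to simple operations on a measured attenuation spectrum. A minimal sketch, using a synthetic power-law spectrum c(λ) ∝ λ^-γ with made-up numbers rather than the paper's data, shows how a log-log least-squares fit recovers the spectral slope:

```python
# Illustrative attenuation-spectrum proxies: c(710) as a particle proxy and a
# log-log regression for the spectral slope gamma. Wavelengths, gamma_true and
# c710 are invented for the sketch; real data would come from a transmissometer.
import math

wavelengths = [412, 440, 488, 510, 555, 650, 710]      # nm
gamma_true, c710 = 0.8, 1.2                            # assumed power-law spectrum
spectrum = [c710 * (710 / w) ** gamma_true for w in wavelengths]

# Ordinary least-squares slope in log-log space; for c ~ lambda^-gamma the
# fitted slope equals -gamma.
xs = [math.log(w) for w in wavelengths]
ys = [math.log(c) for c in spectrum]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
gamma_est = -slope
```

On real spectra the fit would be noisy rather than exact, but the same two numbers (c at 710 nm, fitted γ) are the quantities the study correlates with TSM and CDOM.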
THE PROBLEM WITH ANIONS IN THE DOE COMPLEX
Lumetta, Gregg J.; Singh, R. P.; Moyer, B. A.
2004-08-01
Weapons production and research and development operations at various U.S. Department of Energy (DOE) sites have left a huge legacy of environmental contamination and risk. The most glaring of these problems is the 3.4 × 10^5 m^3 of high-level wastes stored in tanks at the Savannah River, Hanford, Idaho National Engineering and Environmental Laboratory, and Oak Ridge sites. The conversion of these tank wastes into stable waste forms for permanent disposition is arguably the largest environmental remediation effort ever undertaken. The management of anions in the wastes plays a critical role in processing these wastes. Anions can be hazardous in themselves, or they can complicate the waste-management process. The role of the following key anions in processing and immobilizing the DOE tank wastes will be discussed: phosphate, sulfate, chromate, and pertechnetate. This paper will also review work that has been done with actual wastes relevant to separating these anions.
Complex orthodontic problems: the orthognathic patient with temporomandibular disorders.
Thomas, P M; Tucker, M R
1999-12-01
The diagnosis and treatment of temporomandibular disorders (TMD) remain controversial despite considerable research and publication in this area. The relationship of these problems to dental and skeletal malocclusion is equally debatable. Recent studies suggest that although malocclusion may have a role, it is a small one. Accordingly, treatment of TMD with occlusion-altering therapy, such as orthodontics and orthognathic surgery, should be limited to specific situations. This report discusses the management of patients with coexisting TMD and skeletal malocclusion. Current concepts in clinical and radiographic diagnosis are discussed, as well as an overview of noninvasive therapy. A case report is used to illustrate an approach to diagnosis and treatment planning in an individual with active TMD and a skeletal malocclusion requiring orthognathic surgery for correction.
Faksri, Kiatichai; Xia, Eryu; Tan, Jun Hao; Teo, Yik-Ying; Ong, Rick Twee-Hee
2016-11-02
Whole-genome sequencing is increasingly used in clinical diagnosis of tuberculosis and study of Mycobacterium tuberculosis complex (MTC). MTC consists of several genetically homogenous mycobacteria species which can cause tuberculosis in humans and animals. Regions of difference (RDs) are commonly regarded as gold standard genetic markers for MTC classification. We develop RD-Analyzer, a tool that can accurately infer the species and lineage of MTC isolates from sequence reads based on the presence and absence of a set of 31 RDs. Applied on a publicly available diverse set of 377 sequenced MTC isolates from known major species and lineages, RD-Analyzer achieved an accuracy of 98.14 % (370/377) in species prediction and a concordance of 98.47 % (257/261) in Mycobacterium tuberculosis lineage prediction compared to predictions based on single nucleotide polymorphism markers. By comparing respective sequencing read depths on each genomic position between isolates of different sublineages, we were able to identify the known RD markers in different sublineages of Lineage 4 and provide support for six potential delineating markers having high sensitivities and specificities for sublineage prediction. An extended version of RD-Analyzer was thus developed to allow user-defined RDs for lineage prediction. RD-Analyzer is a useful and accurate tool for species, lineage and sublineage prediction using known RDs of MTC from sequence reads and is extendable to accepting user-defined RDs for analysis. RD-Analyzer is written in Python and is freely available at https://github.com/xiaeryu/RD-Analyzer .
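The core idea behind the tool described above, calling a species from the presence/absence pattern of regions of difference (RDs), can be sketched as a lookup table. The RD pair and the calls below are a simplified illustration loosely based on the commonly cited RD9/RD4 deletion markers, not RD-Analyzer's actual 31-marker set or its read-depth inference:

```python
# Hypothetical sketch of RD-based MTC classification: a species call is made
# from which RDs are intact (present) vs deleted (absent). The two-marker
# table here is a toy stand-in for a full RD panel.

RD_PATTERNS = {
    # (RD9 present, RD4 present) -> species call (illustrative)
    (True, True): "M. tuberculosis",
    (False, True): "M. africanum/animal lineages",
    (False, False): "M. bovis",
}

def call_species(rd9_present: bool, rd4_present: bool) -> str:
    """Return a species call, or 'unresolved' for patterns not in the table."""
    return RD_PATTERNS.get((rd9_present, rd4_present), "unresolved")
```

In the real tool the presence/absence of each RD is itself inferred from sequencing read depth over the RD locus; the lookup step shown here is only the final classification stage.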
Nonlinear problems of complex natural systems: Sun and climate dynamics.
Bershadskii, A
2013-01-13
The universal role of the nonlinear one-third subharmonic resonance mechanism in generation of strong fluctuations in complex natural dynamical systems related to global climate is discussed using wavelet regression detrended data. The role of the oceanic Rossby waves in the year-scale global temperature fluctuations and the nonlinear resonance contribution to the El Niño phenomenon have been discussed in detail. The large fluctuations in the reconstructed temperature on millennial time scales (Antarctic ice core data for the past 400,000 years) are also shown to be dominated by the one-third subharmonic resonance, presumably related to the Earth's precession effect on the energy that the intertropical regions receive from the Sun. The effects of galactic turbulence on the temperature fluctuations are also discussed.
Conjecture on the interlacing of zeros in complex Sturm-Liouville problems
Bender, Carl M.; Boettcher, Stefan; Savage, Van M.
2000-09-01
The zeros of the eigenfunctions of self-adjoint Sturm-Liouville eigenvalue problems interlace. For these problems interlacing is crucial for completeness. For the complex Sturm-Liouville problem associated with the Schrödinger equation for a non-Hermitian PT-symmetric Hamiltonian, completeness and interlacing of zeros have never been examined. This paper reports a numerical study of the Sturm-Liouville problems for three complex potentials, the large-N limit of a -(ix)^N potential, a quasi-exactly solvable -x^4 potential, and an ix^3 potential. In all cases the complex zeros of the eigenfunctions exhibit a similar pattern of interlacing and it is conjectured that this pattern is universal. Understanding this pattern could provide insight into whether the eigenfunctions of complex Sturm-Liouville problems form a complete set. (c) 2000 American Institute of Physics.
Students' conceptual performance on synthesis physics problems with varying mathematical complexity
NASA Astrophysics Data System (ADS)
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-06-01
A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems: sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N = 179) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used to solve the optimization problem of schemes and operation modes for distributed-generation energy complexes. Applicability conditions of the simplex method, as applied to energy complexes that include renewable energy installations (solar, wind), diesel generators, and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was performed. The results of optimization calculations for energy complexes operated autonomously and as part of a distribution grid are presented.
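A dispatch problem of the kind the abstract describes reduces to a small linear program. The sketch below is our own toy formulation (invented capacities, costs, and demand, solved with SciPy's `linprog` rather than the authors' code): meet a fixed demand from free-but-capped solar plus costly diesel generation.

```python
# Toy distributed-generation dispatch LP: minimize diesel fuel cost subject to
# meeting demand from diesel + solar. All numbers are illustrative assumptions.
from scipy.optimize import linprog

demand = 100.0        # kW to supply
solar_cap = 40.0      # available solar power, kW (zero marginal cost)
diesel_cost = 0.3     # $/kWh marginal cost of diesel generation

# Decision vector x = [diesel, solar]; linprog minimizes c @ x subject to
# A_ub @ x <= b_ub, so "diesel + solar >= demand" is written with flipped signs.
res = linprog(
    c=[diesel_cost, 0.0],
    A_ub=[[-1.0, -1.0]],          # -(diesel + solar) <= -demand
    b_ub=[-demand],
    bounds=[(0, None), (0, solar_cap)],
)
diesel, solar = res.x
```

As expected, the optimum uses all available solar (40 kW) and covers the remaining 60 kW with diesel; larger versions of the same formulation simply add variables per installation and per time step.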
Beyond pure parasystole: promises and problems in modeling complex arrhythmias.
Courtemanche, M; Glass, L; Rosengarten, M D; Goldberger, A L
1989-08-01
The dynamics of pure parasystole, a cardiac arrhythmia in which two competing pacemakers fire independently, have recently been fully characterized. This model is now extended in an attempt to account for the more complex dynamics occurring with modulated parasystole, in which there exists nonlinear interaction between the sinus node and the ectopic ventricular focus. Theoretical analysis of modulated parasystole reveals three types of dynamics: entrainment, quasiperiodicity, and chaos. Rhythms associated with quasiperiodicity obey a set of rules derived from pure parasystole. This model is applied to the interpretation of continuous electrocardiographic data sets from three patients with complicated patterns of ventricular ectopic activity. We describe several new statistical properties of these records, related to the number of intervening sinus beats between ectopic events, that are essential in characterizing the dynamics and testing mathematical models. Detailed comparison between data and theory in these cases shows substantial areas of agreement as well as potentially important discrepancies. These findings have implications for understanding the dynamics of the heartbeat in normal and pathological conditions.
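The baseline model that this paper extends, pure parasystole, is easy to simulate: sinus and ectopic pacemakers fire strictly periodically and independently, and an ectopic beat is expressed only if it falls outside the refractory period that follows each sinus beat. The periods and refractory time below are illustrative choices, not values from the paper; the statistic computed is the number of intervening sinus beats (NIB) between expressed ectopic beats, which in pure parasystole takes at most three distinct values.

```python
# Toy simulation of pure (unmodulated) parasystole. Sinus beats occur at
# multiples of T_SINUS; ectopic beats at multiples of T_ECTOPIC are expressed
# only if more than REFRACTORY has elapsed since the last sinus beat.
# Parameter values are illustrative.
import math

T_SINUS, T_ECTOPIC, REFRACTORY = 1.0, 1.618, 0.4

def expressed_ectopic_times(n_ectopic=200):
    times = []
    for m in range(1, n_ectopic + 1):
        t = m * T_ECTOPIC
        since_sinus = t % T_SINUS          # time since the most recent sinus beat
        if since_sinus >= REFRACTORY:
            times.append(t)                # outside refractory: beat is expressed
    return times

def intervening_sinus_counts(ectopic_times):
    # NIB: number of sinus beats strictly between consecutive expressed ectopics
    counts = []
    for a, b in zip(ectopic_times, ectopic_times[1:]):
        counts.append(math.floor(b / T_SINUS) - math.floor(a / T_SINUS))
    return counts

nib = intervening_sinus_counts(expressed_ectopic_times())
```

Running this for generic (incommensurate) period ratios reproduces the characteristic pure-parasystole signature: the NIB sequence visits no more than three distinct values, which is the rule set that the modulated model and the patient records are tested against.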
Nuclear processing - a simple cost equation or a complex problem?
Banfield, Z.; Banford, A.W.; Hanson, B.C.; Scully, P.J.
2007-07-01
BNFL has extensive experience of nuclear processing plant from concept through to decommissioning, at all stages of the fuel cycle. Nexia Solutions (formerly BNFL's R and D Division) has always supported BNFL in the development of concept plant, including the development of costed plant designs for the purpose of economic evaluation and technology selection. Having undertaken such studies over a number of years, Nexia Solutions has developed a portfolio of costed plant designs for a broad range of nuclear processes, throughputs and technologies. This work has led to an extensive understanding of how the cost of a nuclear processing plant is affected by the scale of the process and the selection of design philosophy. The relationship has been seen to be nonlinear, so simplistic equations do not apply; it is complex due to the variety of contributory factors. This is particularly evident when considering the scale of a process, for example how step changes in design occur with increasing scale, and how the applicability of technology options can vary with scale. This paper will explore the contribution of scale to nuclear processing plant costs. (authors)
Function allocation in complex systems: reframing an old problem.
Challenger, Rose; Clegg, Chris W; Shepherd, Craig
2013-01-01
In this article, we offer a new, macroergonomics perspective on the long-debated issue of function allocation. We believe thinking in this domain needs to be realigned, moving away from the traditional microergonomics conceptualisation, concerned predominantly with task-based decisions, and towards a macroergonomics approach, viewing function allocation choices as central to effective systems design. We frame our arguments within a systems perspective, advocating that function allocation issues need to be on the agenda of all individuals with a wider interest in the human and organisational aspects of complex work systems, including people who commission, sponsor, design, implement and use such systems. We also argue that allocation decisions should form a transparent, explicit stage early in the systems design and development process, involve multiple stakeholders (including end-users), be evidence-based, framed within the language of risk and utilise iterative methods (e.g. scenarios planning techniques). This article presents a macroergonomics approach to function allocation, advocating its importance in effective systems design. Adopting a systems mindset, we argue function allocation should form an explicit stage early in the design process, involve multiple stakeholders, be evidence-based, framed within the language of risk and utilise iterative methods.
Dusty (complex) plasmas: recent developments, advances, and unsolved problems
NASA Astrophysics Data System (ADS)
Popel, Sergey
The area of dusty (complex) plasma research is a vibrant subfield of plasma physics that belongs to frontier research in physical sciences. This area is intrinsically interdisciplinary and encompasses astrophysics, planetary science, atmospheric science, magnetic fusion energy science, and various applied technologies. The research in dusty plasma started after two major discoveries in very different areas: (1) the discovery by the Voyager 2 spacecraft in 1980 of the radial spokes in Saturn's B ring, and (2) the discovery in the early 1980s of the growth of contaminating dust particles in plasma processing. Dusty plasmas are ubiquitous in the universe; examples are proto-planetary and solar nebulae, molecular clouds, supernovae explosions, interplanetary medium, circumsolar rings, and asteroids. Within the solar system, we have planetary rings (e.g., Saturn and Jupiter), the Martian atmosphere, cometary tails and comae, dust clouds on the Moon, etc. Close to the Earth, there are noctilucent clouds and polar mesospheric summer echoes, which are clouds of tiny (charged) ice particles that are formed in the summer polar mesosphere at altitudes of about 82-95 km. Dust and dusty plasmas are also found in the vicinity of artificial satellites and space stations. Dust also turns out to be common in laboratory plasmas, such as in the processing of semiconductors and in tokamaks. In processing plasmas, dust particles are actually grown in the discharge from the reactive gases used to form the plasmas. An example of the relevance of industrial dusty plasmas is the growth of silicon microcrystals for improved solar cells in the future. In fact, nanostructured polymorphous silicon films provide solar cells with high and time-stable efficiency. These nano-materials can also be used for the fabrication of ultra-large-scale integration circuits, display devices, single-electron devices, light emitting diodes, laser diodes, and others. In microelectronic industries, dust has to be
Kastner, Monika; Makarski, Julie; Hayden, Leigh; Durocher, Lisa; Chatterjee, Ananda; Brouwers, Melissa; Bhattacharyya, Onil
2013-09-12
Realist reviews offer a rigorous method to analyze heterogeneous data emerging from multiple disciplines as a means to develop new concepts, understand the relationships between them, and identify the evidentiary base underpinning them. However, emerging synthesis methods such as the Realist Review are not well operationalized and may be difficult for the novice researcher to grasp. The objective of this paper is to describe the development of an analytic process to organize and synthesize data from a realist review. Clinical practice guidelines have had an inconsistent and modest impact on clinical practice, which may in part be due to limitations in their design. This study illustrates the development of a transparent method for organizing and analyzing a complex data set informed by a Realist Review on guideline implementability to better understand the characteristics of guidelines that affect their uptake in practice (e.g., clarity, format). The data organization method consisted of 4 levels of refinement: 1) extraction and 2) organization of data; 3) creation of a conceptual map of guideline implementability; and 4) the development of a codebook of definitions. This new method comprises four steps: data extraction, data organization, development of a conceptual map, and operationalization via a codebook. Applying this method, we extracted 1736 guideline attributes from 278 articles into a consensus-based set of categories, and collapsed them into 5 core conceptual domains for our guideline implementability map: Language, Format, Rigor of development, Feasibility, Decision-making. This study advances analysis methods by offering a systematic approach to analyzing complex data sets where the goals are to condense, organize and identify relationships.
Measurements of student understanding on complex scientific reasoning problems
NASA Astrophysics Data System (ADS)
Izumi, Alisa Sau-Lin
While there has been much discussion of cognitive processes underlying effective scientific teaching, less is known about the response nature of assessments targeting processes of scientific reasoning specific to biology content. This study used multiple-choice (m-c) and short-answer essay student responses to evaluate progress in high-order reasoning skills. In a pilot investigation of student responses on a non-content-based test of scientific thinking, it was found that some students showed a pre-post gain on the m-c test version while showing no gain on a short-answer essay version of the same questions. This result led to a subsequent research project focused on differences between alternate versions of tests of scientific reasoning. Using m-c and written responses from biology tests targeted toward the skills of (1) reasoning with a model and (2) designing controlled experiments, test score frequencies, factor analysis, and regression models were analyzed to explore test format differences. Understanding the format differences in tests is important for the development of practical ways to identify student gains in scientific reasoning. The overall results suggested test format differences. Factor analysis revealed three interpretable factors: m-c format, genetics content, and model-based reasoning. Frequency distributions on the m-c and open explanation portions of the hybrid items revealed that many students answered the m-c portion of an item correctly but gave inadequate explanations. In other instances students answered the m-c portion incorrectly yet demonstrated sufficient explanation or answered the m-c correctly and also provided poor explanations. When trying to fit test score predictors for non-associated student measures (VSAT, MSAT, high school grade point average, or final course grade), the test scores accounted for close to zero percent of the variance. Overall, these results point to the importance of using multiple methods of testing and of
ERIC Educational Resources Information Center
Astill, Rebecca G.; Van der Heijden, Kristiaan B.; Van IJzendoorn, Marinus H.; Van Someren, Eus J. W.
2012-01-01
Clear associations of sleep, cognitive performance, and behavioral problems have been demonstrated in meta-analyses of studies in adults. This meta-analysis is the first to systematically summarize all relevant studies reporting on sleep, cognition, and behavioral problems in healthy school-age children (5-12 years old) and incorporates 86 studies…
Percentages: The Effect of Problem Structure, Number Complexity and Calculation Format
ERIC Educational Resources Information Center
Baratta, Wendy; Price, Beth; Stacey, Kaye; Steinle, Vicki; Gvozdenko, Eugene
2010-01-01
This study reports how the difficulty of simple worded percentage problems is affected by the problem structure and the complexity of the numbers involved. We also investigate which methods students know. Results from 677 Year 8 and 9 students are reported. Overall the results indicate that more attention needs to be given to this important topic.…
A Real-Life Case Study of Audit Interactions--Resolving Messy, Complex Problems
ERIC Educational Resources Information Center
Beattie, Vivien; Fearnley, Stella; Hines, Tony
2012-01-01
Real-life accounting and auditing problems are often complex and messy, requiring the synthesis of technical knowledge in addition to the application of generic skills. To help students acquire the necessary skills to deal with these problems effectively, educators have called for the use of case-based methods. Cases based on real situations (such…
An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups
ERIC Educational Resources Information Center
Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi
2012-01-01
The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…
Students' Problem-Solving in a Complex Technology-Based Learning Environment.
ERIC Educational Resources Information Center
Suomala, Jyrki; Alamaki, Ari; Alajaaski, Jarkko
The goals of this study were to investigate problem-solving in a context that requires a rich interaction among social, motivational, and cognitive processes and to compare the effects of the mediated and discovery models of learning on students' problem-solving processes in the complex technology-based learning environment. Subjects were 88…
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-man
2012-01-01
Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…
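The design-based (sandwich) approach contrasted with multilevel modeling in the abstract above can be sketched for ordinary least squares with clustered observations. The function name, synthetic data, and cluster structure below are illustrative assumptions, not material from the study:

```python
import numpy as np

def cluster_robust_se(X, y, cluster):
    """OLS with a cluster-robust (sandwich) covariance estimate.
    X: (n, k) design matrix; y: (n,) outcome; cluster: (n,) group labels."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        Xg, ug = X[cluster == g], resid[cluster == g]
        s = Xg.T @ ug                      # per-cluster score
        meat += np.outer(s, s)
    V = bread @ meat @ bread               # sandwich: bread * meat * bread
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(0)
n = 200
cluster = np.repeat(np.arange(20), 10)                 # 20 clusters of 10
x = rng.normal(size=n)
u = rng.normal(size=20)[cluster] + rng.normal(size=n)  # cluster-correlated errors
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])
beta, se = cluster_robust_se(X, y, cluster)
```

The point estimates are plain OLS; only the standard errors change, which is why this is called a "design-based" correction for nonindependence.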
ERIC Educational Resources Information Center
Simic, Andrei
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This guide focuses on the ethnology of traditional and complex societies. Part I, Simple and Complex Societies, includes three sections: (1) Introduction: Anthropologists…
Guidetti, Marco; Young, A P
2011-07-01
We determine the complexity of several constraint-satisfaction problems using the heuristic algorithm WalkSAT. At large sizes N, the complexity increases exponentially with N in all cases. Perhaps surprisingly, out of all the models studied, the hardest for WalkSAT is the one for which there is a polynomial time algorithm.
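WalkSAT itself is simple to state: repeatedly pick an unsatisfied clause and flip one of its variables, either at random or greedily. A minimal sketch (the noise parameter, flip budget, and toy formula are illustrative, not from the study):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000, seed=0):
    """Minimal WalkSAT. Clauses are lists of nonzero ints (DIMACS style):
    literal v means variable |v| is True, -v means it is False."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars + 1)]

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for flips in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign[1:], flips          # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))     # random-walk move
        else:
            # greedy move: flip the variable leaving fewest clauses unsatisfied
            def breakage(v):
                assign[v] = not assign[v]
                cost = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return cost
            var = min((abs(lit) for lit in clause), key=breakage)
        assign[var] = not assign[var]
    return None, max_flips                    # gave up

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
sol, flips = walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3)
```

The complexity measurements in the abstract concern how the required number of flips scales with N, not the per-flip cost.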
ERIC Educational Resources Information Center
Wüstenberg, Sascha; Greiff, Samuel; Vainikainen, Mari-Pauliina; Murphy, Kevin
2016-01-01
Changes in the demands posed by increasingly complex workplaces in the 21st century have raised the importance of nonroutine skills such as complex problem solving (CPS). However, little is known about the antecedents and outcomes of CPS, especially with regard to malleable external factors such as classroom climate. To investigate the relations…
Quantum computational complexity of the N-representability problem: QMA complete.
Liu, Yi-Kai; Christandl, Matthias; Verstraete, F
2007-03-16
We study the computational complexity of the N-representability problem in quantum chemistry. We show that this problem is quantum Merlin-Arthur complete, which is the quantum generalization of nondeterministic polynomial time complete. Our proof uses a simple mapping from spin systems to fermionic systems, as well as a convex optimization technique that reduces the problem of finding ground states to N-representability.
Matson, Kevin D.; Tieleman, B. Irene
2011-01-01
The immune system is a complex collection of interrelated and overlapping solutions to the problem of disease. To deal with this complexity, researchers have devised multiple ways to measure immune function and to analyze the resulting data. In this way both organisms and researchers employ many tactics to solve a complex problem. One challenge facing ecological immunologists is the question of how these many dimensions of immune function can be synthesized to facilitate meaningful interpretations and conclusions. We tackle this challenge by employing and comparing several statistical methods, which we used to test assumptions about how multiple aspects of immune function are related at different organizational levels. We analyzed three distinct datasets that characterized 1) species, 2) subspecies, and 3) among- and within-individual level differences in the relationships among multiple immune indices. Specifically, we used common principal components analysis (CPCA) and two simpler approaches, pair-wise correlations and correlation circles. We also provide a simple example of how these techniques could be used to analyze data from multiple studies. Our findings lead to several general conclusions. First, relationships among indices of immune function may be consistent among some organizational groups (e.g. months over the annual cycle) but not others (e.g. species); therefore any assumption of consistency requires testing before further analyses. Second, simple statistical techniques used in conjunction with more complex multivariate methods give a clearer and more robust picture of immune function than using complex statistics alone. Moreover, these simpler approaches have potential for analyzing comparable data from multiple studies, especially as the field of ecological immunology moves towards greater methodological standardization. PMID:21526186
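The pairwise-correlation comparison the authors describe, i.e. testing whether relationships among immune indices are consistent across organizational groups, can be sketched as follows. The data, group labels, and effect sizes are synthetic illustrations, not the study's data:

```python
import numpy as np

def groupwise_correlations(data, groups):
    """Pairwise correlation matrix of the index columns within each group."""
    return {g: np.corrcoef(data[groups == g], rowvar=False)
            for g in np.unique(groups)}

rng = np.random.default_rng(1)
# hypothetical data: 3 immune indices measured in two species
species = np.repeat(["A", "B"], 50)
indices = rng.normal(size=(100, 3))
indices[:, 1] += 0.8 * indices[:, 0]   # indices 0 and 1 made correlated
mats = groupwise_correlations(indices, species)
# largest between-group difference in any pairwise correlation
max_diff = np.abs(mats["A"] - mats["B"]).max()
```

A small `max_diff` would support the assumption of consistency across groups that the authors caution must be tested before pooling.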
Computer Technology and Complex Problem Solving: Issues in the Study of Complex Cognitive Activity.
ERIC Educational Resources Information Center
Goldman, Susan R.; Zech, Linda K.; Biswas, Gautam; Noser, Tom; Bateman, Helen; Bransford, John; Crews, Thaddeus; Moore, Allison; Nathan, Mitchell; Owens, Stephen
1999-01-01
Examines mathematics problem solving in a computer software environment using graphical representations of the results of simulations with adolescent students. Discusses the strengths and limitations of inferring goals and plans, the use of verbal protocols, and ways for computer-based learning environments to scaffold acquisition of domain…
Ratliff, Eric A; Kaduri, Pamela; Masao, Frank; Mbwambo, Jessie K K; McCurdy, Sheryl A
2016-04-01
Contrary to popular belief, policies on drug use are not always based on scientific evidence or composed in a rational manner. Rather, decisions concerning drug policies reflect the negotiation of actors' ambitions, values, and facts as they organize in different ways around the perceived problems associated with illicit drug use. Drug policy is thus best represented as a complex adaptive system (CAS) that is dynamic, self-organizing, and coevolving. In this analysis, we use a CAS framework to examine how harm reduction emerged around heroin trafficking and use in Tanzania over the past thirty years (1985-present). This account is an organizational ethnography based on the observant participation of the authors as actors within this system. We review the dynamic history and self-organizing nature of harm reduction, noting how interactions among system actors and components have coevolved with patterns of heroin use, policing, and treatment activities over time. Using a CAS framework, we describe harm reduction as a complex process where ambitions, values, facts, and technologies interact in the Tanzanian sociopolitical environment. We review the dynamic history and self-organizing nature of heroin policies, noting how the interactions within and between competing prohibitionist and harm reduction policies have changed with patterns of heroin use, policing, and treatment activities over time. Actors learn from their experiences to organize with other actors, align their values and facts, and implement new policies. Using a CAS approach provides researchers and policy actors a better understanding of patterns and intricacies in drug policy. This knowledge of how the system works can help improve the policy process through adaptive action to introduce new actors, different ideas, and avenues for communication into the system.
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.
2016-09-01
This lecture offers an updated review on the Generalized Integral Transform Technique (GITT), with focus on handling complex geometries, coupled problems, and nonlinear convection-diffusion, so as to illustrate some new application paradigms. Special emphasis is given to demonstrating novel developments, such as a single domain reformulation strategy that simplifies the treatment of complex geometries, an integral balance scheme for handling multiscale problems, the adoption of convective eigenvalue problems in dealing with strongly convective formulations, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Representative application examples that employ recent extensions of the GITT are then provided, and a few numerical results are reported to illustrate the convergence characteristics of the proposed eigenfunction expansions.
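The eigenfunction-expansion idea behind integral transform techniques can be illustrated on the simplest classical case: the 1D heat equation with homogeneous Dirichlet boundaries, solved by projecting onto sine eigenfunctions. This is the textbook expansion underlying such methods, not the GITT machinery itself; all parameter choices are illustrative:

```python
import numpy as np

def heat_eigenexpansion(f, x, t, alpha=1.0, L=1.0, n_terms=50):
    """u_t = alpha * u_xx on (0, L), u(0) = u(L) = 0, u(x, 0) = f(x),
    via u(x, t) = sum_n c_n sin(n pi x / L) exp(-alpha (n pi / L)^2 t)."""
    xs = np.linspace(0.0, L, 2001)
    dx = xs[1] - xs[0]
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        phi = np.sin(n * np.pi * xs / L)
        c_n = (2.0 / L) * np.sum(f(xs) * phi) * dx   # transform (projection)
        u += c_n * np.sin(n * np.pi * x / L) * np.exp(-alpha * (n * np.pi / L) ** 2 * t)
    return u

x = np.linspace(0.0, 1.0, 11)
# initial condition is already an eigenfunction, so it decays without changing shape
u = heat_eigenexpansion(lambda s: np.sin(np.pi * s), x, t=0.1)
```

The transform step (computing `c_n`) and the inversion step (the sum) are exactly the pair that the GITT generalizes to complex geometries and nonlinear problems.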
ERIC Educational Resources Information Center
van Dulmen, Manfred H. M.; Egeland, Byron
2011-01-01
We compared the predictive validity of five aggregation methods for multiple informant data on child and adolescent behavior problems. In addition, we compared the predictive validity of these aggregation methods with single informant scores. Data were derived from the Minnesota Longitudinal Study of Parents and Children (N = 175). Maternal and…
NASA Astrophysics Data System (ADS)
Steen-Eibensteiner, Janice Lee
2006-07-01
A strong science knowledge base and problem solving skills have always been highly valued for employment in the science industry. Skills currently needed for employment include being able to problem solve (Overtoom, 2000). Academia also recognizes the need for effectively teaching students to apply problem solving skills in clinical settings. This thesis investigates how students solve complex science problems in an academic setting in order to inform the development of problem solving skills for the workplace. Students' use of problem solving skills, in the form of learned concepts and procedural knowledge, was studied as students completed a problem that might come up in real life. Students were taking a community college sophomore biology course, Human Anatomy & Physiology II. The problem topic was negative feedback inhibition of the thyroid and parathyroid glands. The research questions answered were (1) How well do community college students use a complex of conceptual knowledge when solving a complex science problem? (2) What conceptual knowledge are community college students using correctly, incorrectly, or not using when solving a complex science problem? (3) What problem solving procedural knowledge are community college students using successfully, unsuccessfully, or not using when solving a complex science problem? From the whole class, the high academic level participants performed at a mean of 72% correct on chapter test questions, a low average to fair grade of C-. The middle and low academic level participants both failed (F) the test questions (37% and 30%, respectively); 29% (9/31) of the students showed only a fair performance while 71% (22/31) failed. From the subset sample of 2 students each from the high, middle, and low academic levels selected from the whole class, 35% (8/23) of the concepts were used effectively, 22% (5/23) marginally, and 43% (10/23) poorly. Only 1 concept was used incorrectly by 3/6 of the students and identified as
ERIC Educational Resources Information Center
Chilvers, Amanda Leigh
2013-01-01
Researchers have noted that mathematics achievement for deaf and hard-of-hearing (d/hh) students has been a concern for many years, including the ability to problem solve. This quasi-experimental study investigates the use of the Exemplars mathematics program with students in grades 2-8 in a school for the deaf that utilizes American Sign Language…
Preparing new nurses with complexity science and problem-based learning.
Hodges, Helen F
2011-01-01
Successful nurses function effectively with adaptability, improvability, and interconnectedness, and can see emerging and unpredictable complex problems. Preparing new nurses for complexity requires a significant change in prevalent but dated nursing education models for rising graduates. The science of complexity, coupled with problem-based learning and peer review, contributes a feasible framework for a constructivist learning environment in which to examine real-time systems data; explore uncertainty, inherent patterns, and ambiguity; and develop skills for unstructured problem solving. This article describes a pilot study of a problem-based learning strategy guided by principles of complexity science in a community clinical nursing course. Thirty-five senior nursing students participated during a 3-year period. Assessments included peer review, a final project paper, reflection, and a satisfaction survey. Results included higher-than-expected levels of student satisfaction, increased breadth of analysis of complex data, acknowledgment of communities as complex adaptive systems, and overall higher-level thinking skills than in previous years. 2011, SLACK Incorporated.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.; Sinha, N.; Lee, S. H.
1986-01-01
A brief review of 2D PNS methodology is first presented, describing the specialized features of the supersonic shock-capturing and subsonic pressure-split models required for the analysis of aircraft, rocket and scramjet jet mixing problems. These features include techniques for dealing with various types of embedded and interfacing subsonic regions, the inclusion of finite-rate chemistry, and direct coupling with potential flow solutions. Preliminary 3D extensions of this PNS methodology geared to supersonic and subsonic rectangular free jet mixing problems are also reviewed. New 3D PNS work is then described, which includes the development of a hybrid supersonic/subsonic free jet mixing model and a supersonic model geared to the analysis of turbulent mixing and combustion processes occurring in scramjet combustor/nozzle flowfields.
NASA Astrophysics Data System (ADS)
West, A. G.; Goldsmith, G. R.; Dawson, T. E.
2010-12-01
The development of isotope ratio infrared spectroscopy (IRIS) for simultaneous δ2H and δ18O analysis of liquid water samples shows much potential for affordable, simple and potentially portable isotopic analyses. IRIS has been shown to be comparable in precision and accuracy to isotope ratio mass spectrometry (IRMS) when analyzing pure water samples. However, recent studies have shown that organic contaminants in analyzed water samples may interfere with the spectroscopy, leading to errors of considerable magnitude in the reported stable isotope data. Many environmental, biological and forensic studies require analyses of water containing organic contaminants in some form, yet our current methods of removing organic contaminants prior to analysis appear inadequate for IRIS. Treated plant water extracts analyzed by IRIS showed deviations as large as 35‰ (δ2H) and 11.8‰ (δ18O) from the IRMS value, indicating that trace amounts of contaminants were sufficient to disrupt IRIS analyses. However, not all organic contaminants negatively influence IRIS. For such samples, IRIS presents a labour-saving method relative to IRMS. Prior to widespread use in the environmental, biological and forensic sciences, a means of obtaining reliable data from IRIS needs to be demonstrated. One approach is to use instrument-based software to flag potentially problematic spectra and output a corrected isotope value based on analysis of the spectra. We evaluate this approach on two IRIS systems and discuss the way forward for ensuring accurate stable isotope data using IRIS.
Medicines counterfeiting is a complex problem: a review of key challenges across the supply chain.
Tremblay, Michael
2013-02-01
The paper begins by asking why there is a market for counterfeit medicines, which in effect creates the problem of counterfeiting itself. Contributing factors include supply chain complexity and the lack of whole-systems thinking. These two underpin the author's view that counterfeiting is a complex (i.e. wicked) problem, and that corporate, public policy and regulatory actions need to be mindful of how their actions may be causal. The paper offers a problem-based review of key components of this complexity, viz., the knowledge end-users/consumers have of medicines; whether restrictive information policies may hamper information provision to patients; the internet's direct access to consumers; internet-enabled distribution of unsafe and counterfeit medicines; whether the internet is a parallel and competitive supply chain to legitimate routes; organised crime as an emerging medicines manufacturer and supplier; and whether substandard medicines are really the bigger problem. Solutions respect the perceived complexity of the supply chain challenges. The paper identifies the need to avoid technologically-driven solutions, calling for 'technological agnosticism'. Both regulation and public policy need to reflect the dynamic nature of the problem and avoid creating perverse incentives; it may be, for instance, that medicines pricing and reimbursement policies, which affect consumer/patient access, act as market signals to counterfeiters, since this creates a cash market in cheaper drugs.
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
Classical and Quantum Complexity of the Sturm-Liouville Eigenvalue Problem
2005-03-03
study of a nonlinear continuous problem was done in [20] for ordinary differential equations with polynomial speedups over the classical settings. The … multivariate approximation, and ordinary differential equations. Tight bit query complexity bounds are known for a number of such problems, see [14, 15, 16] … Linear Algebra, SIAM, Philadelphia. [12] Gary, H. (1965), Computing Eigenvalues of Ordinary Differential Equations with Finite Differences, Mathematics …
On the Critical Behaviour, Crossover Point and Complexity of the Exact Cover Problem
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Shumow, Daniel; Koga, Dennis (Technical Monitor)
2003-01-01
Research into quantum algorithms for NP-complete problems has rekindled interest in the detailed study of a broad class of combinatorial problems. A recent paper applied the quantum adiabatic evolution algorithm to the Exact Cover problem for 3-sets (EC3) and provided empirical evidence that the algorithm was polynomial. In this paper we provide a detailed study of the characteristics of the exact cover problem. We present the annealing approximation applied to EC3, which gives an over-estimate of the phase transition point, and we also identify the phase transition point empirically. We also study the complexity of two classical algorithms on this problem: Davis-Putnam and Simulated Annealing. For these algorithms, EC3 is significantly easier than 3-SAT.
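The exact cover problem itself is easy to state: choose a collection of the given subsets whose disjoint union is the universe. A brute-force solver (exponential in the number of subsets, which is why heuristic, annealing, and quantum-adiabatic approaches are studied) might look like this; the tiny instance is illustrative:

```python
from itertools import combinations

def exact_cover(universe, sets):
    """Brute-force exact cover: return indices of subsets whose disjoint
    union equals the universe, or None if no such selection exists."""
    universe = frozenset(universe)
    for r in range(len(sets) + 1):
        for combo in combinations(range(len(sets)), r):
            chosen = [sets[i] for i in combo]
            # equal sizes + equal union together imply the subsets are disjoint
            if (sum(len(s) for s in chosen) == len(universe)
                    and frozenset().union(*chosen) == universe):
                return list(combo)
    return None

# EC3 instance: every subset has exactly 3 elements
sets = [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({1, 4, 6})]
cover = exact_cover({1, 2, 3, 4, 5, 6}, sets)  # -> [0, 1]
```

Restricting all subsets to size 3 gives the EC3 variant analyzed in the abstract.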
Abdel-Azeim, Safwat; Chermak, Edrisse; Vangone, Anna; Oliva, Romina; Cavallo, Luigi
2014-01-01
Molecular Dynamics (MD) simulations of protein complexes suffer from the lack of specific tools in the analysis step. Analyses of MD trajectories of protein complexes indeed generally rely on classical measures, such as the RMSD, RMSF and gyration radius, conceived and developed for single macromolecules. As a matter of fact, instead, researchers engaged in simulating the dynamics of a protein complex are mainly interested in characterizing the conservation/variation of its biological interface. On these bases, herein we propose a novel approach to the analysis of MD trajectories or other conformational ensembles of protein complexes, MDcons, which uses the conservation of inter-residue contacts at the interface as a measure of the similarity between different snapshots. A "consensus contact map" is also provided, where the conservation of the different contacts is drawn in a grey scale. Finally, the interface area of the complex is monitored during the simulations. To show its utility, we used this novel approach to study two protein-protein complexes with interfaces of comparable size and both dominated by hydrophilic interactions, but having binding affinities at the extremes of the experimental range. MDcons is demonstrated to be extremely useful to analyse the MD trajectories of the investigated complexes, adding important insight into the dynamic behavior of their biological interface. MDcons specifically allows the user to highlight and characterize the dynamics of the interface in protein complexes and can thus be used as a complementary tool for the analysis of MD simulations of both experimental and predicted structures of protein complexes.
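The "consensus contact map" idea, i.e. the fraction of snapshots in which each inter-chain residue pair is in contact, can be sketched with NumPy. This is not the MDcons code itself; the distance cutoff, residue counts, and random coordinates are illustrative assumptions:

```python
import numpy as np

def consensus_contact_map(coords_a, coords_b, cutoff=8.0):
    """Per-residue-pair contact conservation across frames.
    coords_a: (n_frames, n_res_A, 3); coords_b: (n_frames, n_res_B, 3).
    Returns an (n_res_A, n_res_B) map of values in [0, 1]."""
    # pairwise distances for every frame: shape (frames, res_A, res_B)
    diff = coords_a[:, :, None, :] - coords_b[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (dist < cutoff).mean(axis=0)   # fraction of frames in contact

rng = np.random.default_rng(2)
a = rng.uniform(0.0, 30.0, size=(10, 5, 3))   # 10 frames, 5 residues, chain A
b = rng.uniform(0.0, 30.0, size=(10, 4, 3))   # 10 frames, 4 residues, chain B
cmap = consensus_contact_map(a, b)
```

Rendering `cmap` in a grey scale gives exactly the kind of consensus map described in the abstract, with 1.0 meaning a fully conserved contact.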
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
To Live With Complexity: A Problem for Students--And for the Rest of Us.
ERIC Educational Resources Information Center
Ford, Franklin L.
1968-01-01
In articles on student unrest, there is a great tendency to oversimplify the issues and to assume that the components and stakes are the same from Minnesota to Czechoslovakia. To understand this complex phenomenon, the following questions should be answered: How many different problems, of what orders of magnitude and intensity, need to be…
ERIC Educational Resources Information Center
Zydney, Janet Mannheimer
2005-01-01
This pilot study investigated the effectiveness of a multimedia learning environment called "Pollution Solution" on eighth-grade students' ability to define a complex problem. Sixty students from four earth science classes taught by the same teacher in a New York City public school were included in the sample for this study. The classes…
Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim
2013-01-01
Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…
ERIC Educational Resources Information Center
Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.
2016-01-01
Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…
Calculating Probabilistic Distance to Solution in a Complex Problem Solving Domain
ERIC Educational Resources Information Center
Sudol, Leigh Ann; Rivers, Kelly; Harris, Thomas K.
2012-01-01
In complex problem solving domains, correct solutions are often comprised of a combination of individual components. Students usually go through several attempts, each attempt reflecting an individual solution state that can be observed during practice. Classic metrics to measure student performance over time rely on counting the number of…
Regularity of the Dirichlet problem for the complex Monge-Ampère equation.
Moriyon, R
1979-03-01
Regularity up to the boundary of the solutions of a boundary value problem for a complex Monge-Ampère equation on perturbations of an annulus in C^n is proven. The result can be applied to the classification of such domains.
Ecosystem services and cooperative fisheries research to address a complex fishery problem
The St. Louis River represents a complex fishery management problem. Current fishery management goals have to be developed taking into account bi-state commercial, subsistence and recreational fisheries which are valued for different characteristics by a wide range of anglers, as...
The Development of Complex Problem Solving in Adolescence: A Latent Growth Curve Analysis
ERIC Educational Resources Information Center
Frischkorn, Gidon T.; Greiff, Samuel; Wüstenberg, Sascha
2014-01-01
Complex problem solving (CPS) as a cross-curricular competence has recently attracted more attention in educational psychology as indicated by its implementation in international educational large-scale assessments such as the Programme for International Student Assessment. However, research on the development of CPS is scarce, and the few…
Assessment of Complex Problem Solving: What We Know and What We Don't Know
ERIC Educational Resources Information Center
Herde, Christoph Nils; Wüstenberg, Sascha; Greiff, Samuel
2016-01-01
Complex Problem Solving (CPS) is seen as a cross-curricular 21st century skill that has attracted interest in large-scale-assessments. In the Programme for International Student Assessment (PISA) 2012, CPS was assessed all over the world to gain information on students' skills to acquire and apply knowledge while dealing with nontransparent…
ERIC Educational Resources Information Center
Sonnleitner, Philipp; Brunner, Martin; Keller, Ulrich; Martin, Romain
2014-01-01
Whereas the assessment of complex problem solving (CPS) has received increasing attention in the context of international large-scale assessments, its fairness in regard to students' cultural background has gone largely unexplored. On the basis of a student sample of 9th-graders (N = 299), including a representative number of immigrant students (N…
ERIC Educational Resources Information Center
Lou, Yiping
2004-01-01
Online courses have been criticized for their focus on knowledge acquisition rather than on how to solve authentic complex problems, a skill that is increasingly being recognized as critical to meeting the challenges in the real world. The purpose of this study was to explore whether between-group collaboration in project-based online courses can…
Small-Group Problem-Based Learning as a Complex Adaptive System
ERIC Educational Resources Information Center
Mennin, Stewart
2007-01-01
Small-group problem-based learning (PBL) is widely embraced as a method of study in health professions schools and at many different levels of education. Complexity science provides a different lens with which to view and understand the application of this method. It presents new concepts and vocabulary that may be unfamiliar to practitioners of…
ERIC Educational Resources Information Center
Zydney, Janet Mannheimer
2005-01-01
This pilot study investigated the effectiveness of a multimedia learning environment called "Pollution Solution" on eighth-grade students' ability to define a complex problem. Sixty students from four earth science classes taught by the same teacher in a New York City public school were included in the sample for this study. The classes…
Ecosystem services and cooperative fisheries research to address a complex fishery problem
The St. Louis River represents a complex fishery management problem. Current fishery management goals have to be developed taking into account bi-state commercial, subsistence and recreational fisheries which are valued for different characteristics by a wide range of anglers, as...
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
The complex variable reproducing kernel particle method for elasto-plasticity problems
NASA Astrophysics Data System (ADS)
Chen, Li; Cheng, Yumin
2010-05-01
On the basis of the reproducing kernel particle method (RKPM) and complex variable theory, this paper discusses the complex variable reproducing kernel particle method (CVRKPM). The advantage of the CVRKPM is that the correction function of a two-dimensional problem is formed with a one-dimensional basis function when the shape function is constructed. The CVRKPM is then applied to two-dimensional elasto-plasticity problems: the Galerkin weak form yields the discretized system equations, the penalty method enforces the essential boundary conditions, and the Newton-Raphson method is used in the numerical implementation. Three numerical examples show that the method presented in this paper is effective for elasto-plasticity analysis.
Application of the complex scaling method in solving three-body Coulomb scattering problem
NASA Astrophysics Data System (ADS)
Lazauskas, R.
2017-03-01
The three-body scattering problem in Coulombic systems is a widespread, yet unresolved, problem for mathematically rigorous methods. In this work this long-standing challenge has been undertaken by combining the distorted-wave and Faddeev–Merkuriev equation formalisms in conjunction with the complex scaling technique to overcome the difficulties related to the boundary conditions. Contrary to common belief, it is demonstrated that the smooth complex scaling method can be applied to solve the three-body Coulomb scattering problem in a wide energy region, including the fully elastic domain and extending to energies well beyond the atom ionization threshold. The newly developed method is used to study electron scattering on the ground states of hydrogen and positronium atoms as well as the e⁺ + H(n=1) ⇄ p + Ps(n=1) reaction. Where available, the obtained results are compared with experimental data and theoretical predictions, proving the accuracy and efficiency of the newly developed method.
ERIC Educational Resources Information Center
Blackburn, J. Joey; Robinson, J. Shane
2016-01-01
The purpose of this experimental study was to assess the effects of cognitive style, problem complexity, and hypothesis generation on the problem solving ability of school-based agricultural education students. Problem solving ability was defined as time to solution. Kirton's Adaption-Innovation Inventory was employed to assess students' cognitive…
NASA Astrophysics Data System (ADS)
Zozor, S.; Mateos, D.; Lamberti, P. W.
2014-05-01
In this paper, we propose to mix the approach underlying Bandt-Pompe permutation entropy with Lempel-Ziv complexity, to design what we call Lempel-Ziv permutation complexity. The principle consists of two steps: (i) transformation of a continuous-state series, one that is intrinsically multivariate or arises from embedding, into a sequence of permutation vectors, where the components are the positions of the components of the initial vector when re-arranged; (ii) computing the Lempel-Ziv complexity of this series of `symbols', drawn from a discrete finite-size alphabet. On the one hand, the permutation entropy of Bandt-Pompe aims at the study of the entropy of such a sequence, i.e., the entropy of patterns in a sequence (e.g., local increases or decreases). On the other hand, the Lempel-Ziv complexity of a discrete-state sequence aims at the study of the temporal organization of the symbols (i.e., the rate of compressibility of the sequence). Thus, the Lempel-Ziv permutation complexity aims to take advantage of both of these methods. The potential of such a combined approach - a permutation procedure followed by a complexity analysis - is evaluated on both simulated and real data. In both cases, we compare the individual approaches and the combined approach.
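The two-step construction described above (ordinal patterns, then Lempel-Ziv parsing) can be sketched in a few lines of Python. This is an illustrative reading of the procedure, not the authors' code: the function names are hypothetical, and the parsing used here is a simple LZ78-style phrase count, which may differ from the exact Lempel-Ziv variant the paper adopts.

```python
from itertools import permutations

def permutation_symbols(series, order=3):
    """Step (i): map a scalar series to Bandt-Pompe ordinal patterns.

    Each length-`order` window is replaced by the index of the
    permutation that sorts it, giving a discrete sequence over an
    alphabet of order! symbols.
    """
    patterns = {p: i for i, p in enumerate(permutations(range(order)))}
    symbols = []
    for start in range(len(series) - order + 1):
        window = series[start:start + order]
        rank = tuple(sorted(range(order), key=lambda k: window[k]))
        symbols.append(patterns[rank])
    return symbols

def lempel_ziv_complexity(symbols):
    """Step (ii): LZ78-style phrase count of the symbol sequence."""
    phrases, phrase, count = set(), (), 0
    for sym in symbols:
        phrase += (sym,)
        if phrase not in phrases:   # new phrase: record it and restart
            phrases.add(phrase)
            count += 1
            phrase = ()
    return count

symbols = permutation_symbols([4, 7, 9, 10, 6, 11, 3], order=3)
complexity = lempel_ziv_complexity(symbols)
```

A short series thus collapses to a handful of ordinal symbols whose phrase count measures how repetitively the patterns are organized in time.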
Aljhni, Rania; Andre, Claire; Lethier, Lydie; Guillaume, Yves Claude
2015-11-01
A carbon nanotube (CNT) stationary phase was used for the first time to study the β-cyclodextrin (β-CD) solute complexation mechanism using high performance liquid chromatography (HPLC). For this, the β-CD was added at various concentrations in the mobile phase and the effect of column temperature was studied on both the retention of a series of aniline and benzoic acid derivatives with the CNT stationary phase and their complexation mechanism with β-CD. A decrease in the solute retention factor was observed for all the studied molecules without change in the retention order. The apparent formation constant KF of the inclusion complex β-CD/solute was determined at various temperatures. Our results showed that the interaction of β-CD with both the mobile phase and the stationary phase interfered with the complex formation. The enthalpy and entropy of the complex formation (ΔHF and ΔSF) between the solute molecule and CD were determined using a thermodynamic approach. Negative enthalpies and entropies indicated that the inclusion process of the studied molecule in the CD cavity was enthalpically driven and that the hydrogen bonds between carboxylic or aniline groups and the functional groups on the β-CD rim played an important role in the complex formation. Copyright © 2015 Elsevier B.V. All rights reserved.
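The enthalpy and entropy of complex formation can be recovered from the temperature dependence of the formation constant via the van't Hoff relation ln K_F = -ΔH_F/(RT) + ΔS_F/R, a standard treatment consistent with the "thermodynamic approach" the abstract mentions. The sketch below fits that line by ordinary least squares; the numerical K_F values are illustrative placeholders, not the paper's data.

```python
import math

R = 8.314  # molar gas constant, J mol^-1 K^-1

def vant_hoff_fit(temps_K, K_values):
    """Least-squares fit of ln K = -dH/(R*T) + dS/R.

    Returns (dH, dS) with dH in J/mol and dS in J/(mol*K):
    the slope of ln K vs 1/T gives -dH/R, the intercept gives dS/R.
    """
    x = [1.0 / T for T in temps_K]
    y = [math.log(K) for K in K_values]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    return -R * slope, R * intercept

# Illustrative formation constants decreasing with temperature, as
# expected for an enthalpy-driven inclusion process (NOT the paper's data).
temps = [288.15, 298.15, 308.15]   # K
Ks = [1200.0, 800.0, 560.0]        # apparent formation constants K_F
dH, dS = vant_hoff_fit(temps, Ks)  # both come out negative here
```

With K_F falling as temperature rises, both the fitted enthalpy and entropy are negative, matching the enthalpically driven picture described in the abstract.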
NASA Astrophysics Data System (ADS)
Cheng, Yu-Min; Liu, Chao; Bai, Fu-Nong; Peng, Miao-Juan
2015-10-01
In this paper, based on the conjugate of the complex basis function, a new complex variable moving least-squares approximation is discussed. Then using the new approximation to obtain the shape function, an improved complex variable element-free Galerkin (ICVEFG) method is presented for two-dimensional (2D) elastoplasticity problems. Compared with the previous complex variable moving least-squares approximation, the new approximation has greater computational precision and efficiency. Using the penalty method to apply the essential boundary conditions, and using the constrained Galerkin weak form of 2D elastoplasticity to obtain the system equations, we obtain the corresponding formulae of the ICVEFG method for 2D elastoplasticity. Three selected numerical examples are presented using the ICVEFG method to show that the ICVEFG method has the advantages such as greater precision and computational efficiency over the conventional meshless methods. Project supported by the National Natural Science Foundation of China (Grant Nos. 11171208 and U1433104).
NASA Astrophysics Data System (ADS)
Cocco, S.; Monasson, R.
2001-08-01
The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists in determining whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, such as the Davis-Putnam-Logemann-Loveland (DPLL) algorithm, perform a systematic search for a solution, through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational effort required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture for the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
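As a concrete reference point for the search-tree dynamics described above, here is a minimal DPLL solver in Python with unit propagation and trial-and-error branching; clause sets use DIMACS-style signed-integer literals. It is a pedagogical sketch without the variable-ordering heuristics of production solvers, not the instrumented code used in the study.

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None.

    Clauses are lists of nonzero ints, DIMACS-style: k means
    "variable k is true", -k means "variable k is false".
    """
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified: dead branch
        simplified.append(rest)
    if not simplified:
        return assignment                 # every clause satisfied
    for clause in simplified:             # unit propagation: forced moves
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    var = abs(simplified[0][0])           # branch: the trial-and-error step
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
```

Each failed branch of the recursion corresponds to a backtrack node in the search tree whose size the paper's statistical-physics analysis quantifies.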
Kautenburger, Ralf; Beck, Horst Philipp
2007-08-03
For the long-term storage of radioactive waste, detailed information about the geochemical behavior of radioactive and toxic metal ions under environmental conditions is necessary. Humic acid (HA) can play an important role in the immobilisation or mobilisation of metal ions due to complexation and colloid formation. Therefore, we investigate the complexation behavior of HA and its influence on the migration or retardation of selected lanthanides (europium and gadolinium as homologues of the actinides americium and curium). Two independent speciation techniques, ultrafiltration and capillary electrophoresis coupled with inductively coupled plasma mass spectrometry (CE-ICP-MS), have been compared for the study of Eu and Gd interaction with (purified Aldrich) HA. The degree of complexation of Eu and Gd in 25 mg l⁻¹ Aldrich HA solutions was determined over a broad range of metal loading (Eu and Gd total concentrations between 10⁻⁶ and 10⁻⁴ mol l⁻¹), an ionic strength of 10 mM (NaClO4) and different pH values. From the CE-ICP-MS electropherograms, additional information on the charge of the Eu species was obtained by the use of 1-bromopropane as a neutral marker. To detect HA by ICP-MS and to distinguish between HA-complexed and non-complexed metal ions in CE-ICP-MS, we halogenated the HA with iodine as an ICP-MS marker.
Extending XCS with Cyclic Graphs for Scalability on Complex Boolean Problems.
Iqbal, Muhammad; Browne, Will N; Zhang, Mengjie
2015-09-25
A main research direction in the field of evolutionary machine learning is to develop a scalable classifier system to solve high-dimensional problems. Recently, work has begun on autonomously reusing learned building blocks of knowledge to scale from low-dimensional problems to high-dimensional ones. An XCS-based classifier system, known as XCSCFC, has been shown to be scalable, through the addition of expression tree-like code fragments, to a limit beyond standard learning classifier systems. XCSCFC is especially beneficial if the target problem can be divided into a hierarchy of subproblems and each of them is solvable in a bottom-up fashion. However, if the hierarchy of subproblems is too deep, then XCSCFC becomes impractical because of the required computational time and thus eventually hits a limit in problem size. A limitation in this technique is the lack of a cyclic representation, which is inherent in finite state machines (FSMs). However, the evolution of FSMs is a hard task owing to the combinatorially large number of possible states, connections, and interactions. Usually this requires supervised learning to minimize inappropriate FSMs, which for high-dimensional problems necessitates subsampling or incremental testing. To avoid these constraints, this work introduces a state-machine-based encoding scheme into XCS for the first time, termed XCSSMA. The proposed system has been tested on six complex Boolean problem domains: multiplexer, majority-on, carry, even-parity, count ones, and digital design verification problems. The proposed approach outperforms XCSCFA (an XCS that computes actions) and XCSF (an XCS that computes predictions) in three of the six problem domains, while the performance in others is similar. In addition, XCSSMA evolved, for the first time, compact and human-readable general classifiers (i.e., solving any n-bit problems) for the even-parity and carry problem domains, demonstrating its ability to produce scalable solutions using a
The lower bound on complexity of parallel branch-and-bound algorithm for subset sum problem
NASA Astrophysics Data System (ADS)
Kolpakov, Roman; Posypkin, Mikhail
2016-10-01
The subset sum problem is a particular case of the Boolean knapsack problem in which each item's price equals its weight. Informally, it can be stated as searching for the densest packing of a set of items into a box of limited capacity. Recently, coarse-grain parallelizations of the Branch-and-Bound (B&B) method have attracted attention due to the growing popularity of weakly-connected distributed computing platforms. In this paper we consider one such approach to solving the subset sum problem. On the first stage, one of the processors (the manager) performs a number of B&B steps, generating a set of subproblems. On the second stage, the generated subproblems are sent to the other processors, one subproblem per processor. The processors solve the received subproblems completely, and the manager collects the obtained solutions and chooses the optimal one. For this algorithm we formally define the parallel execution model (the frontal scheme of parallelization) and the notion of frontal scheme complexity, which we study for a series of subset sum problems.
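The two-stage frontal scheme described above can be simulated sequentially in Python: a "manager" expands every prefix assignment down to a fixed depth of the B&B tree, and each resulting subproblem is then solved to completion, standing in for one worker processor. The function names and the simple sum-of-remaining-items bound are illustrative assumptions, not the paper's exact formulation.

```python
from itertools import product

def best_subset_sum(weights, capacity, i=0, current=0):
    """Branch-and-bound: best achievable sum <= capacity from item i on."""
    if current > capacity:
        return -1                         # infeasible branch, pruned
    if i == len(weights):
        return current
    rest = sum(weights[i:])
    if current + rest <= capacity:
        return current + rest             # bound: all remaining items fit
    with_i = best_subset_sum(weights, capacity, i + 1, current + weights[i])
    if with_i == capacity:
        return with_i                     # cannot do better than capacity
    without_i = best_subset_sum(weights, capacity, i + 1, current)
    return max(with_i, without_i)

def frontal_scheme(weights, capacity, depth):
    """Stage 1: the manager expands all 2**depth prefix assignments.
    Stage 2: each feasible subproblem is solved to completion
    (sequentially here, in place of one worker per subproblem)."""
    best = 0
    for prefix in product((0, 1), repeat=depth):
        current = sum(w for w, take in zip(weights, prefix) if take)
        if current > capacity:
            continue                      # manager prunes infeasible prefixes
        best = max(best, best_subset_sum(weights, capacity, depth, current))
    return best

best = frontal_scheme([7, 5, 4, 3, 1], capacity=10, depth=2)
```

The number of B&B steps the manager performs (here, the chosen depth) is exactly the knob whose effect on total work the frontal scheme complexity analysis measures.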
Application of software complex turbo problem solver to rayleigh-taylor instability modeling
NASA Astrophysics Data System (ADS)
Fortova, S. V.; Utkin, P. S.; Shepelev, V. V.
2016-10-01
The dynamic processes that take place during the high-speed impact of two metal plates of different densities are investigated using three-dimensional numerical simulations. It is shown that the impact gives rise to a Rayleigh-Taylor instability, which leads to the formation of three-dimensional ring-shaped structures on the surface of the lower-density metal. A comparative analysis of the deformation of the metal interface under different equations of state is performed. The numerical study is carried out with the Turbo Problem Solver software package developed by the authors, which implements a generalized approach to constructing hydrodynamic codes for various computational fluid dynamics problems and provides several numerical schemes as well as software blocks for setting initial conditions, boundary conditions, and body forces. The solution of a test problem on Rayleigh-Taylor instability growth and development for the case of very rapid density growth is also presented.
Generalized CNF satisfiability, local reductions and complexity of succinctly specified problems
Marathe, M.V.; Hunt, H.B. III; Stearns, R.E.; Radhakrishnan, V.
1995-02-01
We study the complexity and efficient approximability of various decision, counting and optimization problems when instances are specified using (1) the 1-dimensional finite periodic narrow specifications of Wanke, (2) the 2-way infinite 1-dimensional narrow periodic (sometimes called dynamic) specifications of Karp and Orlin et al., and (3) the hierarchical specification language of Lengauer et al. We outline how generalized CNF satisfiability problems and local reductions can be used to obtain both hardness and easiness results for a number of decision, counting, optimization and approximate optimization problems when instances are specified as in (1), (2) or (3). As corollaries we obtain a number of new PSPACE-hardness and #PSPACE-hardness results and a number of new polynomial time approximation algorithms for natural PSPACE-hard optimization problems. In particular, assuming P ≠ PSPACE, we characterize completely the complexities of the generalized CNF satisfiability problems SAT(S) of Schaefer [Sc78], when instances are specified as in (1), (2) or (3).
Heredia, Henny; Artmann, Elizabeth; López, Nora; Useche, Julio
2011-03-01
This article analyzes the application of the explanatory moment of Strategic Situational Planning (SSP) and Health Situation Analysis (ASIS) as approaches that, taken together, make it possible to prioritize, from an equity perspective, local health problems amenable to intervention. A case study developed in the parish of Zuata, Aragua State, Venezuela, illustrates the application of both approaches. The main actors of the parish prioritized low drinking-water coverage as a health problem. On analyzing the problem, the following causes were selected for the proposed action plan: scarce community participation, weak governmental plans, absence of urban-planning policy, inadequate administration of public resources, and lack of awareness regarding the rational use of water. The article concludes that the joint SSP-ASIS approach generates inputs that, once concretized by the actors into an action plan, can contribute to reducing inequities. Moreover, the active participation of the actors brings to light the real problems of the population and supports the construction of an agenda of demands.
ERIC Educational Resources Information Center
Hodis, Flaviu A.; Tait, Carolyn; Hodis, Georgeta M.; Hodis, Monica A.; Scornavacca, Eusebio
2016-01-01
This research investigated the interrelations among achievement goals and the underlying reasons for pursuing them. To do so, it utilized the framework of goal complexes, which are regulatory constructs defined at the intersection of aims and reasons. Data from two independent large samples of New Zealand university students showed that across…
Cybersecurity vulnerabilities in medical devices: a complex environment and multifaceted problem.
Williams, Patricia Ah; Woodward, Andrew J
2015-01-01
The increased connectivity to existing computer networks has exposed medical devices to cybersecurity vulnerabilities from which they were previously shielded. For the prevention of cybersecurity incidents, it is important to recognize the complexity of the operational environment as well as to catalog the technical vulnerabilities. Cybersecurity protection is not just a technical issue; it is a richer and more intricate problem to solve. A review of the factors that contribute to such a potentially insecure environment, together with the identification of the vulnerabilities, is important for understanding why these vulnerabilities persist and what the solution space should look like. This multifaceted problem must be viewed from a systemic perspective if adequate protection is to be put in place and patient safety concerns addressed. This requires technical controls, governance, resilience measures, consolidated reporting, context expertise, regulation, and standards. It is evident that a coordinated, proactive approach to address this complex challenge is essential. In the interim, patient safety is under threat.
Divide et impera: subgoaling reduces the complexity of probabilistic inference and problem solving
Maisto, Domenico; Donnarumma, Francesco; Pezzulo, Giovanni
2015-01-01
It has long been recognized that humans (and possibly other animals) usually break problems down into smaller and more manageable problems using subgoals. Despite a general consensus that subgoaling helps problem solving, it is still unclear what the mechanisms guiding online subgoal selection are during the solution of novel problems for which predefined solutions are not available. Under which conditions does subgoaling lead to optimal behaviour? When is subgoaling better than solving a problem from start to finish? Which is the best number and sequence of subgoals to solve a given problem? How are these subgoals selected during online inference? Here, we present a computational account of subgoaling in problem solving. Following Occam's razor, we propose that good subgoals are those that permit planning solutions and controlling behaviour using fewer informational resources, thus yielding parsimony in inference and control. We implement this principle using approximate probabilistic inference: subgoals are selected using a sampling method that considers the descriptive complexity of the resulting sub-problems. We validate the proposed method using a standard reinforcement learning benchmark (four-rooms scenario) and show that the proposed method requires fewer inferential steps and permits selecting more compact control programs compared to an equivalent procedure without subgoaling. Furthermore, we show that the proposed method offers a mechanistic explanation of the neuronal dynamics found in the prefrontal cortex of monkeys that solve planning problems. Our computational framework provides a novel integrative perspective on subgoaling and its adaptive advantages for planning, control and learning, such as lowering cognitive effort and working memory load. PMID:25652466
Solving the three-body Coulomb breakup problem using exterior complex scaling
McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.
2004-05-17
Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum have made this one of the outstanding challenges of atomic physics. A complete solution of this problem in the form of a ''reduction to computation'' of all aspects of the physics is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.
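The exterior complex scaling transformation at the heart of this approach has a standard form: the radial coordinate is left untouched inside a sphere of radius $R_0$ and rotated into the complex plane beyond it, so that outgoing waves decay exponentially and the troublesome asymptotic boundary conditions need not be imposed explicitly. Schematically:

```latex
r \;\longmapsto\;
\begin{cases}
r, & r < R_0,\\[4pt]
R_0 + (r - R_0)\,e^{i\theta}, & r \ge R_0,
\end{cases}
\qquad 0 < \theta < \pi/2 .
```

Under this map an outgoing wave $e^{ikr}$ becomes $e^{ikR_0}\,e^{ik(r-R_0)\cos\theta}\,e^{-k(r-R_0)\sin\theta}$ for $r \ge R_0$, i.e. it falls off exponentially, which is what makes the "reduction to computation" on a finite volume possible.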
Spatio-Temporal Complexity and Large-Scale Structures in Problems of Continuum Mechanic
1993-07-15
(U) 61103D 3484/D7 (URI). Drs. Basil Nicolaenko, Dieter... Abstract fragment: "...orbits on the attractor. We have applied our method to two experimental data sets from Taylor-Couette flows."
The Role of Prior Knowledge and Problem Contexts in Students' Explanations of Complex Systems
NASA Astrophysics Data System (ADS)
Barth-Cohen, Lauren April
The purpose of this dissertation is to study students' competencies in generating scientific explanations within the domain of complex systems, an interdisciplinary area in which students tend to have difficulties. While considering students' developing explanations of how complex systems work, I investigate the role of prior knowledge and how students' explanations systematically vary across seven problem contexts (e.g. the movement of sand dunes, the formation of traffic jams, and diffusion in water). Using the Knowledge in Pieces epistemological perspective, I build a mini-theory of how students construct explanations about the behavior of complex systems. The mini-theory shows how advanced, "decentralized" explanations evolve from a variety of prior knowledge resources, which depend on specific features of the problem. A general emphasis on students' competences is exhibited through three strands of analysis: (1) a focus on moment-to-moment shifts in individuals' explanations in the direction of a normative understanding; (2) a comparison of explanations across the seven problem contexts in order to highlight variation in kinds of prior knowledge that are used; and (3) a concentration on the diversity within explanations that can be all considered examples of emergent thinking. First, I document cases of students' shifting explanations as they become less prototypically centralized (a more naive causality) and then become more prototypically decentralized over short time periods. The analysis illustrates the lines of continuity between these two ways of understanding and how change can occur during the process of students generating a progression of increasingly sophisticated transitional explanations. Second, I find a variety of students' understandings across the problem contexts, expressing both variation in their prior knowledge and how the nature of a specific domain influences reasoning. Certain problem contexts are easier or harder for students
Putting the puzzle together: the role of ‘problem definition’ in complex clinical judgement
Cristancho, Sayra; Lingard, Lorelei; Forbes, Thomas; Ott, Michael; Novick, Richard
2017-01-01
CONTEXT We teach judgement in pieces; that is, we talk about each aspect separately (patient, plan, resources, technique, etc.). We also let trainees figure out how to put the pieces together. In complex situations, this might be problematic. Using data from a drawing-based study on surgeons’ experiences with complex situations, we explore the notion of ‘problem definition’ in real-world clinical judgement using the theoretical lens of systems engineering. METHODS ‘Emergence’, the sensitising concept for analysis, is rooted in two key systems premises: that person and context are inseparable and that what emerges is an act of choice. Via a ‘gallery walk’ we used these premises to perform analysis on individual drawings as well as cross-comparisons of multiple drawings. Our focus was to understand similarities and differences among the vantage points used by multiple surgeons. RESULTS In this paper we challenge two assumptions from current models of clinical judgement: that experts hold a fixed and static definition of the problem and that consequently the focus of the expert’s work is on solving the problem. Each situation described by our participants revealed different but complementary perspectives of what a surgical problem might come to be: from concerns about ensuring standard of care, to balancing personal emotions versus care choices, to coordinating resources, and to maintaining control while in the midst of personality clashes. CONCLUSION We suggest that it is only at the situation and system level, not at the individual level, that we are able to appreciate the nuances of defining the problem when experts make judgements during real-world complex situations. PMID:27943366
Migration of levonorgestrel IUS in a patient with complex medical problems: what should be done?
Soleymani Majd, Hooman; El Hamamy, Essam; Chandrasekar, Ramya; Ismail, Lamiese
2009-03-01
Patients with complex medical problems should be counselled about the need for highly effective contraception, as failure resulting in pregnancy could cause significant morbidity and mortality. The LNG-IUS has gained great popularity and generally has a low side-effect profile; however, perforation of the uterus and migration of the device is a potentially serious complication known to be associated with its use. The currently accepted management is removal of the device from the abdominal cavity in order to prevent further morbidity. However, this is not always a simple matter in patients who have complex medical problems and who are deemed unfit for surgery. Each time the patient comes for renewal of the contraceptive method, clinicians need to reassess the risks and benefits. This is particularly relevant in patients who have complex medical problems, where special attention needs to be given not only to immediate risks but also to long-term ones. Careful individualised counselling and consideration are paramount, and perhaps it would have been prudent to discuss vasectomy with this patient and her husband (as the first line of contraception), as this may have avoided the ensuing complications arising from the chosen method.
NASA Astrophysics Data System (ADS)
Silva, Justina A.
Quantifying belowground dynamics is critical to our understanding of plant and ecosystem function and belowground carbon cycling, yet currently available tools for complex belowground image analyses are insufficient. We introduce novel techniques combining digital image processing tools and geographic information systems (GIS) analysis to permit semi-automated analysis of complex root and soil dynamics. We illustrate methodologies with imagery from microcosms, minirhizotrons, and a rhizotron, in upland and peatland soils. We provide guidelines for correct image capture, a method that automatically stitches together numerous minirhizotron images into one seamless image, and image analysis using image segmentation and classification in SPRING or change analysis in ArcMap. These methods facilitate spatial and temporal root and soil interaction studies, providing a framework to expand a more comprehensive understanding of belowground dynamics.
On the Fractality of Complex Networks: Covering Problem, Algorithms and Ahlfors Regularity
Wang, Lihong; Wang, Qin; Xi, Lifeng; Chen, Jin; Wang, Songjing; Bao, Liulu; Yu, Zhouyu; Zhao, Luming
2017-01-01
In this paper, we revisit the fractality of complex networks by investigating three dimensions defined with respect to the minimum box-covering, the minimum ball-covering, and the average volume of balls. The first two dimensions are calculated through the minimum box-covering problem and the minimum ball-covering problem. For the minimum ball-covering problem, we prove its NP-completeness and propose several heuristic algorithms for constructing feasible solutions, and we compare the performance of these algorithms. For the third dimension, we introduce the random ball-volume algorithm. We introduce the notion of Ahlfors regularity of networks and prove that the three dimensions above coincide if networks are Ahlfors regular. We also provide a class of networks satisfying Ahlfors regularity. PMID:28128289
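The box-covering dimension mentioned in this abstract is estimated in practice by covering the network with as few "boxes" as possible at each box size and fitting the slope of log N_B against log l_B. As an illustrative sketch only (a simple first-fit greedy heuristic, not the authors' algorithms), the following pure-Python function counts boxes for a graph given as an adjacency list, where a box may only contain nodes whose pairwise distance is strictly less than `l_b`:

```python
from collections import deque

def bfs_distances(adj, src):
    # Unweighted shortest-path distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def box_count(adj, l_b):
    # Greedy first-fit box covering: place each node into the first box
    # whose members are all at distance < l_b from it, else open a new box.
    nodes = list(adj)
    dist = {u: bfs_distances(adj, u) for u in nodes}
    boxes = []  # each box is a set of mutually close nodes
    for u in nodes:
        for box in boxes:
            if all(dist[u].get(v, float("inf")) < l_b for v in box):
                box.add(u)
                break
        else:
            boxes.append({u})
    return len(boxes)
```

Running `box_count` over several box sizes and regressing log N_B on log l_B gives a box-covering dimension estimate; the heuristic only upper-bounds the true minimum cover, which is why the paper's NP-completeness result matters.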
Analysis and formulation of a class of complex dynamic optimization problems
NASA Astrophysics Data System (ADS)
Kameswaran, Shivakumar
The Direct Transcription approach, also known as the direct simultaneous approach, is a widely used solution strategy for the solution of dynamic optimization problems involving differential-algebraic equations (DAEs). Direct transcription refers to the procedure of approximating the infinite dimensional problem by a finite dimensional one, which is then solved using a nonlinear programming (NLP) solver tailored to large-scale problems. Systems governed by partial differential equations (PDEs) can also be handled by spatially discretizing the PDEs to convert them to a system of DAEs. The objective of this thesis is firstly to ensure that direct transcription using Radau collocation is provably correct, and secondly to widen the applicability of the direct simultaneous approach to a larger class of dynamic optimization and optimal control problems (OCPs). This thesis aims at addressing these issues using rigorous theoretical tools and/or characteristic examples, and at the same time at using the results to solve large-scale industrial applications and realize the benefits. The first part of this work deals with the analysis of convergence rates for direct transcription of unconstrained and final-time equality constrained optimal control problems. The problems are discretized using collocation at Radau points. Convergence is analyzed from an NLP/matrix-algebra perspective, which enables the prediction of the conditioning of the direct transcription NLP as the mesh size becomes finer. Several convergence results are presented along with tests on numerous example problems. These convergence results lead to an adjoint estimation procedure given the Lagrange multipliers for the large-scale NLP. The work also reveals the role of process control concepts such as controllability in the convergence analysis, and provides a very important link between control and optimization inside the framework of dynamic optimization. As an effort to extend the applicability of the direct
ERIC Educational Resources Information Center
Goode, Natassia; Beckmann, Jens F.
2010-01-01
This study investigates the relationships between structural knowledge, control performance and fluid intelligence in a complex problem solving (CPS) task. 75 participants received either complete, partial or no information regarding the underlying structure of a complex problem solving task, and controlled the task to reach specific goals.…
Möser, Christina; Kautenburger, Ralf; Philipp Beck, Horst
2012-05-01
Investigations of the mobility of radioactive and nonradioactive substances in the environment are important tasks for the development of a future disposal site in deep geological formations. Dissolved organic matter (DOM) can play an important role in the mobilization of metal ions due to complexation. In this study, we investigate the complexation behavior of humic acid (HA) as a model substance for DOM and its influence on the migration of europium, as a homologue for the actinide americium, and uranium, as the principal component of nuclear fuel. As the speciation technique, capillary electrophoresis (CE) was hyphenated with inductively coupled plasma mass spectrometry (ICP-MS). For the study, 0.5 mg·L⁻¹ of the metals, 25 mg·L⁻¹ of (purified Aldrich) HA, and an aqueous sodium perchlorate solution with an ionic strength of 10 mM at pH 5 were used. CE-ICP-MS clearly shows the different speciation of the triply positively charged europium and the doubly positively charged uranyl cation with HA. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Robust non-parametric tests for complex-repeated measures problems in ophthalmology.
Brombin, Chiara; Midena, Edoardo; Salmaso, Luigi
2013-12-01
The NonParametric Combination (NPC) methodology of dependent permutation tests allows the experimenter to face many complex multivariate testing problems and represents a convincing and powerful alternative to standard parametric methods. The main advantage of this approach lies in its flexibility in handling any type of variable (categorical and quantitative, with or without missing values) while at the same time taking dependencies among those variables into account without the need to model them. NPC methodology makes it possible to deal with repeated measures, paired data, restricted alternative hypotheses, missing data (completely at random or not), and high-dimensional, small-sample-size data. Hence, NPC methodology can offer a significant contribution to successful research in biomedical studies with several endpoints, since it provides reasonably efficient solutions and clear interpretations of inferential results (Pesarin F. Multivariate permutation tests: with application in biostatistics. Chichester-New York: John Wiley & Sons, 2001; Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester, UK: John Wiley & Sons, 2010). We focus on non-parametric permutation solutions to two real-case studies in ophthalmology concerning complex repeated-measures problems. For each data set, different analyses are presented, thus highlighting characteristic aspects of the data structure itself. Our goal is to present different solutions to multivariate complex case studies, guiding researchers/readers to choose, from various possible interpretations of a problem, the one that has the highest flexibility and statistical power under a set of less stringent assumptions. MATLAB code has been implemented to carry out the analyses.
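As a minimal illustration of the permutation principle underlying NPC (a single-endpoint two-sample test on the difference of means, i.e. one building block that NPC then combines across dependent endpoints, not the full combination procedure), a sketch in pure Python:

```python
import random

def permutation_test(x, y, n_perm=10000, seed=0):
    # Two-sample permutation test: repeatedly reshuffle group labels and
    # count how often the permuted |mean difference| reaches the observed one.
    rng = random.Random(seed)
    pooled = x + y
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        count += abs(sum(px) / len(px) - sum(py) / len(py)) >= observed
    # Add-one smoothing keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)
```

No distributional assumptions are needed beyond exchangeability under the null, which is what makes the approach attractive for the small-sample, multi-endpoint designs the abstract describes.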
NASA Astrophysics Data System (ADS)
Agustan, S.; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-05-01
In the last few years, reflective thinking has become a very popular term in the world of education, especially in the professional education of teachers. One goal of educational personnel and teacher-training institutions is to create responsible prospective teachers who are capable of reflective thinking. Reflective thinking is a future competence that should be taught to students to face the challenges and respond to the demands of the 21st century. Reflective thinking can be applied in mathematics because, by thinking reflectively, students can improve their curiosity to solve mathematical problems. In solving mathematical problems, it is assumed that cognitive style has an impact on a prospective teacher's mental activity. As a consequence, reflective thinking and cognitive style are important in solving mathematical problems. The subject of this research paper is a female prospective teacher who has a field-dependent cognitive style. The purpose of this research paper is to investigate the ability of prospective teachers' reflective thinking in solving mathematical problems. This research paper is descriptive, using a qualitative approach. To analyze the data related to the prospective teacher's reflective thinking in solving a contextual mathematical problem, the researchers focus on four main categories which describe the prospective teacher's activities in using reflective thinking, namely: (a) formulation and synthesis of experience, (b) orderliness of experience, (c) evaluating the experience, and (d) testing the selected solution based on the experience.
How to solve complex problems in foundry plants - future of casting simulation -
NASA Astrophysics Data System (ADS)
Ohnaka, I.
2015-06-01
Although the computer simulation of casting has progressed dramatically over the last decades, there are still many challenges and problems. This paper discusses how to solve complex engineering problems in foundry plants and what we should do in the future, in particular for casting simulation. First, problem-solving procedures including the application of computer simulation are demonstrated and various difficulties are pointed out, exemplified mainly by porosity defects in sand castings of spheroidal graphite cast irons. Next, looking back at conventional scientific and engineering research aimed at understanding casting phenomena, challenges and problems are discussed from a problem-solving point of view, followed by discussion of the issues we should tackle, such as how to integrate the huge amount of knowledge dispersed across various disciplines, the differentiation of science-oriented and engineering-oriented models, professional ethics, how to handle fluctuating materials and initial and boundary conditions, error accumulation, simulation codes as black boxes, etc. Finally, some suggestions are made on how to address these issues, such as promoting research on simulation based on science-oriented models and publishing reliable data on casting phenomena in complicated-shaped castings, including reconsideration of the evaluation system.
NASA Astrophysics Data System (ADS)
Agranovich, Daniel; Polygalov, Eugene; Popov, Ivan; Ben Ishai, Paul; Feldman, Yuri
2017-03-01
One of the approaches to bypass the problem of electrode polarization in dielectric measurements is the free electrode method. The advantage of this technique is that the probing electric field in the material is not supplied by contact electrodes, but rather by electromagnetic induction. We have designed an inductive dielectric analyzer based on a sensor comprising two concentric toroidal coils. In this work, we present an analytic derivation of the relationship between the impedance measured by the sensor and the complex dielectric permittivity of the sample. The obtained relationship was successfully employed to measure the dielectric permittivity and conductivity of various alcohols and aqueous salt solutions.
Krishnamoorthy, Janarthanan; Mohanty, Smita
2011-01-01
Isothermal titration calorimetry (ITC) is an important technique used in quantitatively analyzing the global mechanism of protein-protein or protein-ligand interactions through thermodynamic measurements. Among different binding mechanisms, the parallel and ligand-induced protein oligomerization mechanisms are technically difficult to analyze compared with a sequential binding mechanism. Here, we present a methodology implemented as a program "Open-ITC" that eliminates the need for exact analytical expressions for free ligand concentrations [L] and mole fractions of bound ligand θ that are required for the thermogram analysis. Adopting a genetic algorithm-based optimization, the thermodynamic parameters are determined, and their standard errors are evaluated at the global minimum by calculating the Jacobian matrix. This approach yielded statistically consistent results for a single-site and a two-site binding protein-ligand system. Further, a comparative simulation of a two-step sequential, a parallel, and a ligand-induced oligomerization model revealed that their mechanistic differences are discernible in ITC thermograms, only if the first binding step is weaker compared with the second binding step (K(1)
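For context, in the simplest 1:1 binding case (a generic textbook relation, not the paper's multi-site or oligomerization models), the mole fraction of bound protein θ does have a closed form: combining K = [PL]/([P][L]) with the mass balances gives a quadratic in the complex concentration. A hypothetical helper solving it:

```python
import math

def fraction_bound(L_tot, P_tot, K):
    # 1:1 binding with mass balances L_tot = [L] + [PL], P_tot = [P] + [PL].
    # [PL] is the smaller root of:
    #   x^2 - (L_tot + P_tot + 1/K) x + L_tot * P_tot = 0
    b = L_tot + P_tot + 1.0 / K
    complex_conc = (b - math.sqrt(b * b - 4.0 * L_tot * P_tot)) / 2.0
    return complex_conc / P_tot  # mole fraction of bound protein, θ
```

It is precisely because such closed forms become intractable for parallel and ligand-induced oligomerization schemes that the paper resorts to numerical, genetic-algorithm-based fitting.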
NASA Astrophysics Data System (ADS)
Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang
2012-12-01
In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. Proper users' precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power together as the new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has a closed-form solution with an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. In the first part, the algorithm iterates with Householder transformations to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one-dimensional optimization subproblems. From any initial point, this algorithm obtains precoders and decoders with low leakage interference in a short time. In the second part, to exploit the advantage of MAMA, it continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that in one iteration both proposed algorithms generally have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve performances similar to the MSP algorithm with less execution time, and show better performances than the existing alternating minimization algorithm in terms of sum rate. Besides, from the viewpoint of convergence rate, simulation results show that MAMA enjoys the fastest speed with respect to reaching a certain sum rate value, while the hybrid algorithm converges fastest in eliminating interference.
NASA Astrophysics Data System (ADS)
Howell, Bryan; McIntyre, Cameron C.
2016-06-01
Objective. Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. Approach. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Main results. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. Significance. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
ERIC Educational Resources Information Center
Bogard, Treavor; Liu, Min; Chiang, Yueh-hui Vanessa
2013-01-01
This multiple-case study examined how advanced learners solved a complex problem, focusing on how their frequency and application of cognitive processes contributed to differences in performance outcomes, and developing a mental model of a problem. Fifteen graduate students with backgrounds related to the problem context participated in the study.…
Building University Capacity to Visualize Solutions to Complex Problems in the Arctic
NASA Astrophysics Data System (ADS)
Broderson, D.; Veazey, P.; Raymond, V. L.; Kowalski, K.; Prakash, A.; Signor, B.
2016-12-01
Rapidly changing environments are creating complex problems across the globe, which are particularly magnified in the Arctic. These worldwide challenges can best be addressed through diverse and interdisciplinary research teams. It is incumbent on such teams to promote co-production of knowledge and data-driven decision-making by identifying effective methods to communicate their findings and to engage with the public. Decision Theater North (DTN) is a new semi-immersive visualization system that provides a space for teams to collaborate and develop solutions to complex problems, relying on diverse sets of skills and knowledge. It provides a venue to synthesize the talents of scientists, who gather information (data); modelers, who create models of complex systems; artists, who develop visualizations; communicators, who connect and bridge populations; and policymakers, who can use the visualizations to develop sustainable solutions to pressing problems. The mission of Decision Theater North is to provide a cutting-edge visual environment to facilitate dialogue and decision-making by stakeholders including government, industry, communities and academia. We achieve this mission by adopting a multi-faceted approach reflected in the theater's design, technology, networking capabilities, user support, community relationship building, and strategic partnerships. DTN is a joint project of Alaska's National Science Foundation Experimental Program to Stimulate Competitive Research (NSF EPSCoR) and the University of Alaska Fairbanks (UAF), who have brought the facility up to full operational status and are now expanding its development space to support larger team science efforts. Based in Fairbanks, Alaska, DTN is uniquely poised to address changes taking place in the Arctic and subarctic, and is connected with a larger network of decision theaters that include the Arizona State University Decision Theater Network and the McCain Institute in Washington, DC.
On complex roots of an equation arising in the oblique derivative problem
NASA Astrophysics Data System (ADS)
Kostin, A. B.; Sherstyukov, V. B.
2017-01-01
The paper is concerned with the eigenvalue problem for the Laplace operator in a disc under the condition that the oblique derivative vanishes on the disc boundary. In a famous article by V.A. Il'in and E.I. Moiseev (Differential Equations, 1994) it was found, in particular, that any root μ of an equation of the given form involving the Bessel function Jn(μ) determines an eigenvalue λ = μ² of the problem. In our work we correct the information about the location of the eigenvalues: an explicit sector of the complex plane containing all the eigenvalues is specified. It is shown that all the nonzero roots of the equation are simple, and a refined description of the set of their localization in the complex plane is given. To prove these facts we use methods of partial differential equations as well as methods of the theory of entire functions.
Hybrid binary GA-EDA algorithms for complex “black-box” optimization problems
NASA Astrophysics Data System (ADS)
Sopov, E.
2017-02-01
Genetic Algorithms (GAs) have proved their efficiency in solving many complex optimization problems. GAs can also be applied to “black-box” problems, because they realize a “blind” search and do not require any specific information about the features of the search space and objectives. It is clear that a GA uses a “trial-and-error” strategy to explore the search space, and collects some statistical information that is stored in the form of genes in the population. Estimation of Distribution Algorithms (EDAs) have a very similar realization to GAs, but use an explicit representation of search experience in the form of a statistical probability distribution. In this study we discuss some approaches for improving standard GA performance by combining the binary GA with an EDA. Finally, a novel approach for large-scale global optimization is proposed. The experimental results and a comparison with some well-studied techniques are presented and discussed.
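To make the GA/EDA contrast concrete, here is a toy sketch of an EDA-style binary optimizer in the spirit of PBIL, applied to the standard OneMax benchmark (count of ones); this illustrates the "explicit probability distribution" idea, not the paper's specific hybrid:

```python
import random

def pbil_onemax(n_bits=20, pop_size=30, lr=0.1, generations=200, seed=1):
    # PBIL-style EDA: keep a per-bit probability vector, sample a population
    # from it (GA-like trial-and-error), then shift the vector toward the
    # best sample (EDA's explicit model of search experience).
    rng = random.Random(seed)
    p = [0.5] * n_bits          # initial distribution: each bit fair
    best = None
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        elite = max(pop, key=sum)  # OneMax fitness = number of ones
        if best is None or sum(elite) > sum(best):
            best = elite
        # Learning step: move the probability vector toward the elite sample.
        p = [(1 - lr) * p[i] + lr * elite[i] for i in range(n_bits)]
    return best
```

A GA-EDA hybrid of the kind the abstract discusses would additionally apply crossover/mutation to the sampled population before updating the distribution.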
[Present-day problems of complex hygienic evaluation of drinking water use].
Tulakin, A V; Novikov, Iu V; Tsyplakova, G V; Ampleeva, G P; Shukelaĭt', A B
2005-01-01
The authors offer substantiated methodical approaches to the complex evaluation of the sanitary reliability of drinking water supply systems. They recommend not only evaluating drinking water quality, but also assessing the sanitary state of water sources (catchment areas), the reliability of water preparation and transportation, the standards of water supply, and the reliability of production laboratory control. A range of complex hygienic studies have demonstrated that the problems of the Voronezh interurban reservoir as a water source are caused by its multi-purpose use. Under these conditions, the insufficient hygienic efficiency of conventional water preparation schemes and the low sanitary reliability of water transportation systems favor a negative influence of the water factor on population mortality. The offered methodical approaches give a systematic idea of the factors that determine drinking water quality. Operative administrative decisions concerning the hygienic safety of public water use may be made with these methodical approaches taken into consideration.
Liu, Chunmei; Song, Yinglei
2011-01-01
In this paper, we study the parameterized complexity of the Dominating Set problem in chordal graphs and near chordal graphs. We show the problem is W[2]-hard and cannot be solved in time n^{o(k)} in chordal and s-chordal (s > 3) graphs unless W[1]=FPT. In addition, we obtain inapproximability results for computing a minimum dominating set in chordal and near chordal graphs. Our results prove that unless NP=P, the minimum dominating set in a chordal or s-chordal (s > 3) graph cannot be approximated within a ratio of (c/3) ln n in polynomial time, where n is the number of vertices in the graph and 0 < c < 1 is the constant from the inapproximability of the minimum dominating set in general graphs. In other words, our results suggest that restricting to chordal or s-chordal graphs can improve the approximation ratio by no more than a factor of 3. We then extend our techniques to find similar results for the Independent Dominating Set problem and the Connected Dominating Set problem in chordal or near chordal graphs. PMID:23874144
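The logarithmic approximation ratio that the abstract's lower bound is measured against is attained by the classic greedy algorithm for dominating set (the set-cover greedy, which achieves an O(ln n) ratio on general graphs). A minimal sketch, not taken from the paper, for an undirected graph given as an adjacency list:

```python
def greedy_dominating_set(adj):
    # Repeatedly pick the vertex whose closed neighborhood (itself plus its
    # neighbors) covers the most not-yet-dominated vertices; this is the
    # set-cover greedy and yields an O(ln n)-approximate dominating set.
    undominated = set(adj)
    dom = set()
    while undominated:
        best = max(adj, key=lambda u: len(undominated & ({u} | set(adj[u]))))
        dom.add(best)
        undominated -= {best} | set(adj[best])
    return dom
```

The paper's result says that even on chordal or s-chordal graphs, no polynomial-time algorithm can beat this guarantee by more than a constant factor of 3 unless NP=P.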
NASA Astrophysics Data System (ADS)
Wiek, Arnim; Foley, Rider W.; Guston, David H.
2012-09-01
Nanotechnology is widely associated with the promise of positively contributing to sustainability. However, this view often focuses on end-of-pipe applications, for instance, for water purification or energy efficiency, and relies on a narrow concept of sustainability. Approaching sustainability problems and solution options from a comprehensive and systemic perspective instead may yield quite different conclusions about the contribution of nanotechnology to sustainability. This study conceptualizes sustainability problems as complex constellations with several potential intervention points and amenable to different solution options. The study presents results from interdisciplinary workshops and literature reviews that appraise the contribution of the selected nanotechnologies to mitigate such problems. The study focuses exemplarily on the urban context to make the appraisals tangible and relevant. The solution potential of nanotechnology is explored not only for well-known urban sustainability problems such as water contamination and energy use but also for less obvious ones such as childhood obesity. Results indicate not only potentials but also limitations of nanotechnology's contribution to sustainability and can inform anticipatory governance of nanotechnology in general, and in the urban context in particular.
Hill, Jennie; Powlitch, Stephanie; Furniss, Frederick
2008-01-01
The current study aimed to replicate and extend Rojahn et al. [Rojahn, J., Aman, M. G., Matson, J. L., & Mayville, E. (2003). The aberrant behavior checklist and the behavior problems inventory: Convergent and divergent validity. Research in Developmental Disabilities, 24, 391-404] by examining the convergent validity of the behavior problems inventory (BPI) and the aberrant behavior checklist (ABC) for individuals presenting with multiple complex behavior problems. Data were collected from 69 children and adults with severe intellectual disabilities and challenging behavior living in residential establishments. MANCOVA analyses showed that individuals with elevated BPI stereotyped behavior subscale scores had higher scores on ABC lethargy and stereotypy subscales, while those with elevated BPI aggressive/destructive behavior subscale scores obtained higher scores on ABC irritability, stereotypy and hyperactivity subscales. Multiple regression analyses showed a corresponding pattern of results in the prediction of ABC subscale scores by BPI subscale scores. Exploratory factor analysis of the BPI data suggested a six-factor solution with an aggressive/destructive behavior factor, four factors relating to stereotypy, and one related to stereotypy and self-injury. These results, discussed with reference to Rojahn et al. [Rojahn, J., Aman, M. G., Matson, J. L., & Mayville, E. (2003). The aberrant behavior checklist and the behavior problems inventory: Convergent and divergent validity. Research in Developmental Disabilities, 24, 391-404], support the existence of relationships between specific subscales of the two instruments in addition to an overall association between total scores related to general severity of behavioral disturbance.
Object-oriented Bayesian networks for complex forensic DNA profiling problems.
Dawid, A P; Mortera, J; Vicard, P
2007-07-04
We describe a flexible computational toolkit, based on object-oriented Bayesian networks, that can be used to model and solve a wide variety of complex problems of relationship testing using DNA profiles. In particular this can account for such complicating features as missing individuals, mutation and null alleles. We illustrate the use of this toolkit with several examples, including disputed paternity with missing or additional measurements, and criminal identification. We investigate the effects on likelihood ratios of introducing mutation and/or null alleles, and show that this can be substantial even when the underlying perturbations are small.
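To give a flavor of the calculations such Bayesian networks automate, here is a hand-rolled single-locus disputed-paternity likelihood ratio under plain Mendelian inheritance (an illustrative sketch only, without the mutation, null-allele, and missing-individual machinery that the authors' object-oriented toolkit handles; all function names are hypothetical):

```python
from itertools import product

def transmit_prob(genotype, allele):
    # Probability a parent with this (unordered, 2-allele) genotype
    # transmits `allele` to a child.
    return genotype.count(allele) / 2.0

def child_prob(mother, father, child):
    # P(child's unordered genotype | both parents) under Mendelian rules.
    a, b = child
    p = transmit_prob(mother, a) * transmit_prob(father, b)
    if a != b:
        p += transmit_prob(mother, b) * transmit_prob(father, a)
    return p

def random_man_prob(mother, child, freqs):
    # Marginalize over a random man's genotype drawn from population
    # allele frequencies (Hardy-Weinberg).
    alleles = list(freqs)
    return sum(freqs[x] * freqs[y] * child_prob(mother, (x, y), child)
               for x, y in product(alleles, repeat=2))

def paternity_lr(mother, alleged_father, child, freqs):
    # LR = P(child | mother, alleged father) / P(child | mother, random man)
    return (child_prob(mother, alleged_father, child)
            / random_man_prob(mother, child, freqs))
```

A full casework network multiplies such per-locus ratios across many loci and, as the paper shows, extends the same probabilistic machinery to missing relatives, mutation, and null alleles.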
The anatomical problem posed by brain complexity and size: a potential solution.
DeFelipe, Javier
2015-01-01
Over the years the field of neuroanatomy has evolved considerably, but unraveling the extraordinary structural and functional complexity of the brain seems to be an unattainable goal, partly because it is only possible to obtain an imprecise connection matrix of the brain. The reasons why reaching such a goal appears almost impossible to date are discussed here, together with suggestions of how we could overcome this anatomical problem by establishing new methodologies to study the brain and by promoting interdisciplinary collaboration. Generating a realistic computational model seems to be the solution, rather than attempting to fully reconstruct the whole brain or a particular brain region.
The anatomical problem posed by brain complexity and size: a potential solution
DeFelipe, Javier
2015-01-01
Over the years the field of neuroanatomy has evolved considerably, but unraveling the extraordinary structural and functional complexity of the brain seems to be an unattainable goal, partly because it is only possible to obtain an imprecise connection matrix of the brain. The reasons why reaching such a goal appears almost impossible to date are discussed here, together with suggestions of how we could overcome this anatomical problem by establishing new methodologies to study the brain and by promoting interdisciplinary collaboration. Generating a realistic computational model seems to be the solution, rather than attempting to fully reconstruct the whole brain or a particular brain region. PMID:26347617
A new approach to the solution of boundary value problems involving complex configurations
NASA Technical Reports Server (NTRS)
Rubbert, P. E.; Bussoletti, J. E.; Johnson, F. T.; Sidwell, K. W.; Rowe, W. S.; Samant, S. S.; Sengupta, G.; Weatherill, W. H.; Burkhart, R. H.; Woo, A. C.
1986-01-01
A new approach for solving certain types of boundary value problems about complex configurations is presented. Numerical algorithms from such diverse fields as finite elements, preconditioned Krylov subspace methods, discrete Fourier analysis, and integral equations are combined to take advantage of the memory, speed and architecture of current and emerging supercomputers. Although the approach has application to many branches of computational physics, the present effort is concentrated in areas of Computational Fluid Dynamics (CFD) such as steady nonlinear aerodynamics, time harmonic unsteady aerodynamics, and aeroacoustics. The most significant attribute of the approach is that it can handle truly arbitrary boundary geometries and eliminates the difficult task of generating surface fitted grids.
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas
2014-12-10
One of the most important goals in the operations of many corporations today is to maximize profit, and one important tool to that end is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities through a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be they PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems within a suggested framework model based on discrete event simulation.
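The simulation side of such a framework can be sketched by estimating the expected cost rate of candidate PM intervals through Monte Carlo simulation of random failures. This is an illustrative age-replacement toy model, not the paper's framework; the Weibull parameters and cost figures are hypothetical:

```python
import random

def expected_cost_rate(pm_interval, horizon=10_000.0, cm_cost=100.0,
                       pm_cost=10.0, shape=2.5, scale=100.0, runs=200):
    """Average cost per unit time under an age-replacement policy:
    replace preventively at pm_interval, correctively on failure."""
    total = 0.0
    for _ in range(runs):
        t, cost = 0.0, 0.0
        while t < horizon:
            # random.weibullvariate takes (scale, shape) in that order
            life = random.weibullvariate(scale, shape)
            if life < pm_interval:
                cost += cm_cost          # corrective maintenance on failure
                t += life
            else:
                cost += pm_cost          # preventive replacement on schedule
                t += pm_interval
        total += cost / horizon
    return total / runs

random.seed(1)
# Evaluate a few candidate PM intervals and keep the cheapest.
candidates = [40, 100, 200]
best = min(candidates, key=expected_cost_rate)
```

A genetic algorithm would replace the exhaustive scan over `candidates` when the space of maintenance policies is too large to enumerate.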
Karemere, Hermès; Ribesse, Nathalie; Marchal, Bruno; Macq, Jean
2015-01-01
This study deals with the adaptation of the Katana referral hospital in the Eastern Democratic Republic of Congo to a changing environment affected for more than a decade by intermittent armed conflicts. Its objective is to generate theoretical propositions for approaching the analysis of hospital governance differently, with the aim of assessing hospital performance and how to improve it. The methodology is a case study using mixed methods (qualitative and quantitative) for data collection. It uses (1) hospital data to measure the output of the hospital, (2) a literature review to identify, among other things, events and interventions recorded in the hospital's history during the study period, and (3) information from individual interviews to validate the interpretation of the results from the previous two data sources and to understand the responsiveness of the referral hospital's management team during times of change. The study yields four theoretical propositions: (1) interaction between key agents is a positive force driving adaptation if the actors share the same vision; (2) the strength of the interaction between agents is largely based on the nature of institutional arrangements, which in turn are shaped by the actors themselves; (3) the owner and the management team play a decisive role in implementing effective institutional arrangements and establishing positive interactions between agents; (4) analyzing the recipient population's perception of the health services provided allows the services offered to be better tailored to the population's needs and expectations. The research shows that providing financial and technical support and managing a hospital are not enough for it to operate and adapt to a changing environment; it must also be animated, considering that it is a complex adaptive system and that this animation is nothing other than the induction of positive interaction between agents.
The search for complex problem-solving strategies in the presence of stressors.
Van Hiel, Alain; Mervielde, Ivan
2007-12-01
The present research tests the effects of time pressure and noise on open-mindedness to discover new problem-solving strategies. We are primarily interested in transfer of skill from one phase to the next. More specifically, this study investigates whether the presence of stressors makes participants adhere to the sustained use of complex rules. Participants learned to apply a complex rule in the first phase of a category learning task. In the second phase, this rule became dysfunctional and participants had to search for a new categorization rule in order to assign the stimuli to the correct classes. Two experiments were set up to investigate this issue. Participants were found to have difficulty discovering a complex Phase 2 rule in the presence of stressors, whereas the discovery of a simple rule was not hindered by the presence of stressors. In the discussion, it is argued that the present results are compatible with previous research on stressors showing that time pressure and noise induce the application of simple strategies. The innovative finding here is that this simplification also occurs in individuals who are accustomed to using complex solutions. The implications of the present results for emergency response training are elaborated upon.
Komaromy, Miriam; Madden, Erin Fanning; Zurawski, Andrea; Kalishman, Summers; Barker, Kristin; O'Sullivan, Patricia; Jurado, Martin; Arora, Sanjeev
2017-09-01
OBJECTIVE: To elicit patients' perceptions of factors that facilitate their engagement in care. METHODS: In-depth interviews with 20 adult Medicaid patients who had complex health problems and frequent hospitalizations/emergency department use, and who were enrolled in an intensive, team-based care program designed to address medical, behavioral, and social needs. RESULTS: Prior to engaging in the program, participants described weak relationships with primary care providers, frequent hospitalizations and emergency visits, poor adherence to medications, and severe social barriers to care. After participating in the program, participants identified key factors that enabled them to develop trust and engage with care, including availability for extended intensive interactions, a non-judgmental approach, attention to patients' material needs, and social contact for isolated patients. After developing relationships with their care team, participants described changes such as sustained interactions with their primary care team and incremental improvements in health behaviors. CONCLUSIONS: These findings illuminate factors promoting "contingent engagement" for low socio-economic status patients with complex health problems, allowing them to become proactive in ways commensurate with their circumstances, and offer insights for designing interventions to improve patient outcomes. For these patients, engagement is contingent on healthcare providers' efforts to develop trust and address patients' material needs. Copyright © 2017 Elsevier B.V. All rights reserved.
SVD-GFD scheme to simulate complex moving body problems in 3D space
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Yu, P.; Yeo, K. S.; Khoo, B. C.
2010-03-01
The present paper presents a hybrid meshfree-and-Cartesian grid method for simulating moving body incompressible viscous flow problems in 3D space. The method combines the merits of cost-efficient and accurate conventional finite difference approximations on Cartesian grids with the geometric freedom of generalized finite difference (GFD) approximations on meshfree grids. Error minimization in GFD is carried out by singular value decomposition (SVD). The Arbitrary Lagrangian-Eulerian (ALE) form of the Navier-Stokes equations on convecting nodes is integrated by a fractional-step projection method. The present hybrid grid method employs a relatively simple mode of nodal administration. Nevertheless, it has the geometrical flexibility of unstructured mesh-based finite-volume and finite element methods. Boundary conditions are precisely implemented on boundary nodes without interpolation. The present scheme is validated by a moving patch consistency test as well as against published results for 3D moving body problems. Finally, the method is applied on low-Reynolds number flapping wing applications, where large boundary motions are involved. The present study demonstrates the potential of the present hybrid meshfree-and-Cartesian grid scheme for solving complex moving body problems in 3D.
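The SVD-based least-squares step at the heart of GFD can be sketched in a few lines: fit a second-order Taylor expansion to scattered neighbor values with a pseudoinverse (which NumPy computes via the SVD) and read off the derivative estimates. This is a self-contained 2D illustration under stated assumptions, not the authors' solver:

```python
import numpy as np

def gfd_derivatives(center, neighbors, values, f_center):
    """Estimate (fx, fy) at `center` from scattered 2D neighbor values by
    least-squares fitting a second-order Taylor expansion via SVD."""
    d = neighbors - center                      # nodal offsets, shape (n, 2)
    # Columns multiply the unknowns [fx, fy, fxx, fxy, fyy].
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5 * d[:, 0]**2, d[:, 0] * d[:, 1],
                         0.5 * d[:, 1]**2])
    rhs = values - f_center
    coeffs = np.linalg.pinv(A) @ rhs            # pinv is computed via SVD
    return coeffs[0], coeffs[1]

# Verify on f(x, y) = sin(x) * cos(y), whose gradient at (0, 0) is (1, 0).
rng = np.random.default_rng(0)
pts = rng.uniform(-0.1, 0.1, size=(12, 2))     # irregular meshfree cloud
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
fx, fy = gfd_derivatives(np.zeros(2), pts, vals, 0.0)
```

The SVD makes the fit robust when the scattered cloud is nearly degenerate, which is the practical reason for preferring it over a direct normal-equations solve.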
Exploring the complexity of inquiry learning in an open-ended problem space
NASA Astrophysics Data System (ADS)
Clarke, Jody
Data-gathering and problem identification are key components of scientific inquiry. However, few researchers have studied how students learn these skills because historically this required a time-consuming, complicated method of capturing the details of learners' data-gathering processes. Nor are classroom settings authentic contexts in which students could exhibit problem identification skills parallel to those involved in deconstructing complex real world situations. In this study of middle school students, because of my access to an innovative technology, I simulated a disease outbreak in a virtual community as a complicated, authentic problem. As students worked through the curriculum in the virtual world, their time-stamped actions were stored by the computer in event-logs. Using these records, I tracked in detail how the student scientists made sense of the complexity they faced and how they identified and investigated the problem using science-inquiry skills. To describe the degree to which students' data collection narrowed and focused on a specific disease over time, I developed a rubric and automated the coding of records in the event-logs. I measured the ongoing development of the students' "systematicity" in investigating the disease outbreak. I demonstrated that coding event-logs is an effective yet non-intrusive way of collecting and parsing detailed information about students' behaviors in real time in an authentic setting. My principal research question was "Do students who are more thoughtful about their inquiry prior to entry into the curriculum demonstrate increased systematicity in their inquiry behavior during the experience, by narrowing the focus of their data-gathering more rapidly than students who enter with lower levels of thoughtfulness about inquiry?" My sample consisted of 403 middle-school students from public schools in the US who volunteered to participate in the River City Project in spring 2008. Contrary to my hypothesis, I found
Álvarez-Campos, Patricia; Giribet, Gonzalo; Riesgo, Ana
2017-04-01
Syllis gracilis is an emblematic member of the subfamily Syllinae (Syllidae, Annelida), which inhabits shallow, temperate coastal waters and can be found on algae, coral rubble, and sponges. Their distinctive ypsiloid chaetae, usually found in specimens from populations all around the world, led to the consideration of the species as cosmopolitan, even though four other species have similar chaetae: Syllis magellanica, S. picta, S. mayeri and S. ypsiloides. The discovery of deeply divergent lineages in the Mediterranean Sea, that were morphologically similar, questioned the cosmopolitanism of S. gracilis and suggested the possibility of it being a species complex. In order to assess the speciation patterns within the putative S. gracilis complex, we undertook species delimitation and phylogenetic analyses on 61 specimens morphologically ascribed to Syllis gracilis and closely related species using a multilocus molecular dataset (two mitochondrial and two nuclear markers). Our results suggest high levels of genetic differentiation between the S. gracilis populations analyzed, some of which have morphologically distinctive features. Five to eight distinct lineages (depending on the analysis) were identified, all with geographically restricted distributions. Although the presence of ypsiloid chaetae has been traditionally considered the main character to identify S. gracilis, we conclude that this feature is homoplastic. Instead, we propose that characters such as the degree of fusion of blades and shafts in chaetae, the morphology of the posterior chaetae or the animal color pattern should be considered to differentiate lineages within the S. gracilis species complex. Our study does not support the cosmopolitanism of S. gracilis, and instead provides morphological and molecular evidence of the existence of a complex of pseudo-cryptic species.
Perfect absorption in Schrödinger-like problems using non-equidistant complex grids
NASA Astrophysics Data System (ADS)
Weinmüller, Markus; Weinmüller, Michael; Rohland, Jonathan; Scrinzi, Armin
2017-03-01
Two non-equidistant grid implementations of infinite-range exterior complex scaling are introduced that allow for perfect absorption in the time-dependent Schrödinger equation. Finite-element discrete-variable discretizations provide absorption as efficient as the corresponding finite-element discretizations. This finding is at variance with results reported in the literature [L. Tao et al., Phys. Rev. A 48, 063419 (2009)]. For finite differences, a new class of generalized Q-point schemes for non-equidistant grids is derived. Convergence of absorption is exponential, ∼ Δx^(Q-1), and numerically robust. Local relative errors ≲ 10^(-9) are achieved in a standard problem of strong-field ionization.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems via sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate the sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements and parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
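The combined state-and-parameter estimation described above can be sketched with a bootstrap particle filter on a toy scalar model, augmenting the state with the unknown parameter. All model values here are illustrative, not those of the hyperthermia problem:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dynamic model: x_t = a * x_{t-1} + w_t,  y_t = x_t + v_t.
a_true, sw, sv, T = 0.9, 0.5, 0.5, 200
x, ys = 1.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, sw)
    ys.append(x + rng.normal(0, sv))

# Bootstrap filter with the state augmented by the unknown parameter a.
N = 2000
xp = rng.normal(0, 1, N)                     # state particles
ap = rng.uniform(0.0, 1.0, N)                # parameter particles
for y in ys:
    ap = ap + rng.normal(0, 0.01, N)         # small jitter keeps diversity
    xp = ap * xp + rng.normal(0, sw, N)      # propagate through the model
    w = np.exp(-0.5 * ((y - xp) / sv) ** 2)  # Gaussian likelihood weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)         # multinomial resampling
    xp, ap = xp[idx], ap[idx]

a_est = ap.mean()   # posterior mean of the fixed parameter
```

The parameter jitter is the standard artificial-dynamics device for fixed parameters; without it, resampling collapses the parameter particles onto a few values.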
Class II major histocompatibility complex tetramer staining: progress, problems, and prospects
Vollers, Sabrina S; Stern, Lawrence J
2008-01-01
The use of major histocompatibility complex (MHC) tetramers in the detection and analysis of antigen-specific T cells has become more widespread since its introduction 11 years ago. Early challenges in the application of tetramer staining to CD4+ T cells centred around difficulties in the expression of various class II MHC allelic variants and the detection of low-frequency T cells in mixed populations. As many of the technical obstacles to class II MHC tetramer staining have been overcome, the focus has returned to uncertainties concerning how oligomer valency and T-cell receptor/MHC affinity affect tetramer binding. Such issues have become more important with an increase in the number of studies relying on direct ex vivo analysis of antigen-specific CD4+ T cells. In this review we discuss which problems in class II MHC tetramer staining have been solved to date, and which matters remain to be considered. PMID:18251991
Diagnostic and management problems in a complex case of connective tissue disease.
Yeap, S S; Deighton, C M; Powell, R J; Read, R C; Finch, R G
1995-12-01
A 28-year-old Nigerian woman presented with persistent pyrexia, marked pruritus, eosinophilia, myalgias, flitting arthralgias, serositis and massive splenomegaly. Intensive investigation for an infective or neoplastic aetiology proved negative. Empirical treatment for helminthic infections and tuberculosis was unhelpful. Although there were no specific clues to suggest an underlying connective tissue disease, a trial of steroids and azathioprine was introduced, with no obvious response. Her condition deteriorated to a point where it was decided that intravenous immunosuppressive therapy was needed and subsequently, her condition improved remarkably. This patient illustrates the problems in the diagnosis and management of complex disorders, particularly when classical tests for connective tissue diseases are absent. Also, we would like to report that marked pruritus can be associated with connective tissue disease.
Diagnostic and management problems in a complex case of connective tissue disease.
Yeap, S. S.; Deighton, C. M.; Powell, R. J.; Read, R. C.; Finch, R. G.
1995-01-01
A 28-year-old Nigerian woman presented with persistent pyrexia, marked pruritus, eosinophilia, myalgias, flitting arthralgias, serositis and massive splenomegaly. Intensive investigation for an infective or neoplastic aetiology proved negative. Empirical treatment for helminthic infections and tuberculosis was unhelpful. Although there were no specific clues to suggest an underlying connective tissue disease, a trial of steroids and azathioprine was introduced, with no obvious response. Her condition deteriorated to a point where it was decided that intravenous immunosuppressive therapy was needed and subsequently, her condition improved remarkably. This patient illustrates the problems in the diagnosis and management of complex disorders, particularly when classical tests for connective tissue diseases are absent. Also, we would like to report that marked pruritus can be associated with connective tissue disease. PMID:8552544
Communication: overcoming the root search problem in complex quantum trajectory calculations.
Zamstein, Noa; Tannor, David J
2014-01-28
Three new developments are presented regarding the semiclassical coherent state propagator. First, we present a conceptually different derivation of Huber and Heller's method for identifying complex root trajectories and their equations of motion [D. Huber and E. J. Heller, J. Chem. Phys. 87, 5302 (1987)]. Our method proceeds directly from the time-dependent Schrödinger equation and therefore allows various generalizations of the formalism. Second, we obtain an analytic expression for the semiclassical coherent state propagator. We show that the prefactor can be expressed in a form that requires solving significantly fewer equations of motion than in alternative expressions. Third, the semiclassical coherent state propagator is used to formulate a final value representation of the time-dependent wavefunction that avoids the root search, eliminates problems with caustics and automatically includes interference. We present numerical results for the 1D Morse oscillator showing that the method may become an attractive alternative to existing semiclassical approaches.
McDonald, Ruth
2014-10-01
There is a trend in health systems around the world to place great emphasis on and faith in improving 'leadership'. Leadership has been defined in many ways and the elitist implications of traditional notions of leadership sit uncomfortably with modern healthcare organisations. The concept of distributed leadership incorporates inclusivity, collectiveness and collaboration, with the result that, to some extent, all staff, not just those in senior management roles, are viewed as leaders. Leadership development programmes are intended to equip individuals to improve leadership skills, but we know little about their effectiveness. Furthermore, the content of these programmes varies widely and the fact that many lack a sense of how they fit with individual or organisational goals raises questions about how they are intended to achieve their aims. It is important to avoid simplistic assumptions about the ability of improved leadership to solve complex problems. It is also important to evaluate leadership development programmes in ways that go beyond descriptive accounts.
Addressing complex healthcare problems in diverse settings: insights from activity theory.
Greig, Gail; Entwistle, Vikki A; Beech, Nic
2012-02-01
In the U.K., approaches to policy implementation, service improvement and quality assurance treat policy, management and clinical care as separate, hierarchical domains. They are often based on the central knowledge transfer (KT) theory idea that best practice solutions to complex problems can be identified and 'rolled out' across organisations. When the designated 'best practice' is not implemented, this is interpreted as local--particularly management--failure. Remedial actions include reiterating policy aims and tightening performance management of solution implementation, frequently to no avail. We propose activity theory (AT) as an alternative approach to identifying and understanding the challenges of addressing complex healthcare problems across diverse settings. AT challenges the KT conceptual separations between levels of policy, management and clinical care. It does not regard knowledge and practice as separable, and does not understand them in the commodified way that has typified some versions of KT theory. Instead, AT focuses on "objects of activity" which can be contested. It sees new practice as emerging from contradiction and understands knowledge and practice as fundamentally entwined, not separate. From an AT perspective, there can be no single best practice. The contributions of AT are that it enables us to understand the dynamics of knowledge-practice in activities rather than between levels. It shows how efforts to reduce variation from best practice may paradoxically remove a key source of practice improvement. After explaining the principles of AT we illustrate its explanatory potential through an ethnographic study of primary healthcare teams responding to a policy aim of reducing inappropriate hospital admissions of older people by the 'best practice' of rapid response teams. Copyright © 2011 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Winkel, Brian
2008-01-01
A complex technology-based problem in visualization and computation for students in calculus is presented. Strategies are shown for its solution and the opportunities for students to put together sequences of concepts and skills to build for success are highlighted. The problem itself involves placing an object under water in order to actually see…
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos
2013-01-01
The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…
ERIC Educational Resources Information Center
Eseryel, Deniz; Law, Victor; Ifenthaler, Dirk; Ge, Xun; Miller, Raymond
2014-01-01
Digital game-based learning, especially massively multiplayer online games, has been touted for its potential to promote student motivation and complex problem-solving competency development. However, current evidence is limited to anecdotal studies. The purpose of this empirical investigation is to examine the complex interplay between learners'…
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has been a notable problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave, and current undesirable work schedules are partly responsible for that working condition. Basically, there is a lack of complementarity between the head nurse's responsibilities and the nurses' needs. In particular, given highly diverse nurse preferences, the central challenge in nurse scheduling is the failure to balance the interests of both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve while satisfying nurses' diverse requests and upholding the imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements to the EA operators are suggested so that the EA acquires the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation, as well as to flexibility in the search, which corresponds to the employment of exploration and exploitation principles.
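The ingredients named above, penalty-based handling of hard and soft constraints, tournament parent selection, and a specialized shift-swap mutation, can be sketched on a toy roster. The ward size, shift codes and penalty weights are illustrative, not the paper's instance:

```python
import random

NURSES, DAYS = 6, 7
SHIFTS = "MEN-"                      # morning, evening, night, day off

def penalty(roster):
    """Weighted constraint violations: hard coverage plus soft preferences."""
    p = 0
    for d in range(DAYS):            # hard: at least one nurse on M, E and N
        col = [roster[n][d] for n in range(NURSES)]
        p += 100 * sum(col.count(s) == 0 for s in "MEN")
    for n in range(NURSES):          # soft: avoid a night followed by a morning
        p += 10 * sum(roster[n][d] == "N" and roster[n][d + 1] == "M"
                      for d in range(DAYS - 1))
    return p

def mutate(roster):
    """Specialized mutation: swap two shifts within one nurse's week."""
    r = [row[:] for row in roster]
    n = random.randrange(NURSES)
    d1, d2 = random.sample(range(DAYS), 2)
    r[n][d1], r[n][d2] = r[n][d2], r[n][d1]
    return r

def tournament(pop, k=3):
    """Parent selection: best of k randomly drawn rosters."""
    return min(random.sample(pop, k), key=penalty)

random.seed(7)
pop = [[[random.choice(SHIFTS) for _ in range(DAYS)] for _ in range(NURSES)]
       for _ in range(30)]
best = min(pop, key=penalty)
start = penalty(best)                # track improvement over the run
for _ in range(300):                 # steady-state loop, worst-replacement
    child = mutate(tournament(pop))
    pop[max(range(len(pop)), key=lambda i: penalty(pop[i]))] = child
    best = min(best, child, key=penalty)
```

A full NSP solver would add crossover, semi-hard constraints and repair operators, but the structure of the search is the same.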
An immersed boundary computational model for acoustic scattering problems with complex geometries.
Sun, Xiaofeng; Jiang, Yongsong; Liang, An; Jing, Xiaodong
2012-11-01
An immersed boundary computational model is presented for dealing with acoustic scattering problems involving complex geometries, in which the wall boundary condition is treated as a direct body force determined by satisfying the non-penetration boundary condition. Two distinct grids are used to discretize the fluid domain and the immersed boundary, respectively. The immersed boundaries are represented by Lagrangian points, and the direct body force determined on these points is applied to the neighboring Eulerian points. The coupling between the Lagrangian and Eulerian points is effected by a discrete delta function. The linearized Euler equations are spatially discretized with a fourth-order dispersion-relation-preserving scheme and temporally integrated with a low-dissipation and low-dispersion Runge-Kutta scheme. A perfectly matched layer technique is applied to absorb outgoing waves and waves entering the immersed bodies. Several benchmark problems for computational aeroacoustic solvers are performed to validate the present method.
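The coupling step, spreading a Lagrangian point force onto neighboring Eulerian nodes through a regularized delta function, can be illustrated in 1D with Peskin's four-point cosine kernel, a standard choice (the paper's exact kernel may differ). The key property checked below is that the spread force integrates to the original point force:

```python
import math

def peskin_delta(r):
    """Peskin's 4-point cosine kernel; support is |r| <= 2 grid cells."""
    return 0.25 * (1.0 + math.cos(math.pi * r / 2.0)) if abs(r) < 2.0 else 0.0

def spread_force(X, F, h, n):
    """Spread a point force F at Lagrangian position X onto an n-node
    Eulerian grid with spacing h (1D for clarity)."""
    f = [0.0] * n
    for j in range(n):
        f[j] += F * peskin_delta((j * h - X) / h) / h
    return f

h, n = 0.1, 64
X, F = 3.217, 1.0                     # off-grid Lagrangian point, unit force
f = spread_force(X, F, h, n)
total = sum(fj * h for fj in f)       # discrete integral of the body force
```

The cosine kernel satisfies a discrete partition of unity, so `total` equals `F` to machine precision regardless of where the Lagrangian point falls between grid nodes; the same kernel is used in reverse to interpolate Eulerian velocities onto the boundary.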
Fibromyalgia and disability adjudication: No simple solutions to a complex problem
Harth, Manfred; Nielson, Warren R
2014-01-01
BACKGROUND: Adjudication of disability claims related to fibromyalgia (FM) syndrome can be a challenging and complex process. A commentary published in the current issue of Pain Research & Management makes suggestions for improvement. The authors of the commentary contend that: previously and currently used criteria for the diagnosis of FM are irrelevant to clinical practice; the opinions of family physicians should supersede those of experts; there is little evidence that trauma can cause FM; no formal instruments are necessary to assess disability; and many FM patients on or applying for disability are exaggerating or malingering, and tests of symptoms validity should be used to identify malingerers. OBJECTIVES: To assess the assertions made by Fitzcharles et al. METHODS: A narrative review of the available research literature was performed. RESULTS: Available diagnostic criteria should be used in a medicolegal context; family physicians are frequently uncertain about FM and/or biased; there is considerable evidence that trauma can be a cause of FM; it is essential to use validated instruments to assess functional impairment; and the available tests of physical effort and symptom validity are of uncertain value in identifying malingering in FM. CONCLUSIONS: The available evidence does not support many of the suggestions presented in the commentary. Caution is advised in adopting simple solutions for disability adjudication in FM because they are generally incompatible with the inherently complex nature of the problem. PMID:25479149
Characteristics of fluent skills in a complex, dynamic problem-solving task.
Sohn, Myeong-Ho; Douglass, Scott A; Chen, Mon-Chu; Anderson, John R
2005-01-01
We examined critical characteristics of fluent cognitive skills, using the Georgia Tech Aegis Simulation Program, a tactical decision-making computer game that simulates tasks of an anti-air-warfare coordinator. To characterize learning, we adopted the unit-task analysis framework, in which a task is decomposed into several unit tasks that are further decomposed into functional-level subtasks. Our results showed that learning at a global level could be decomposed into learning smaller component tasks. Further, most learning was associated with a reduction in cognitive processes, in which people make inferences from the currently available information. Eye-movement data also revealed that the time spent on task-irrelevant regions of the display decreased more than did the time spent on task-relevant regions. In sum, although fluency in dynamic, complex problem solving was achieved by attaining efficiency in perceptual, motor, and cognitive processes, the magnitude of the gains depended on the preexisting fluency of the component skills. These results imply that a training program should decompose a task into its component skills and emphasize those components with which trainees have relatively little prior experience. Actual or potential applications of this research include learning and training of complex tasks as well as evaluation of performance on those tasks.
Decision Analysis for Environmental Problems
Environmental management problems are often complex and uncertain. A formal process with proper guidance is needed to understand the issues, identify sources of disagreement, and analyze the major uncertainties in environmental problems. This course will present a process that fo...
NASA Astrophysics Data System (ADS)
Oches, E. A.; Szymanski, D. W.; Snyder, B.; Gulati, G. J.; Davis, P. T.
2012-12-01
The highly interdisciplinary nature of sustainability presents pedagogic challenges when sustainability concepts are incorporated into traditional disciplinary courses. At Bentley University, where over 90 percent of students major in business disciplines, we have created a multidisciplinary course module centered on corn ethanol that explores a complex social, environmental, and economic problem and develops basic data analysis and analytical thinking skills in several courses spanning the natural, physical, and social sciences within the business curriculum. Through an NSF-CCLI grant, Bentley faculty from several disciplines participated in a summer workshop to define learning objectives, create course modules, and develop an assessment plan to enhance interdisciplinary sustainability teaching. The core instructional outcome was a data-rich exercise for all participating courses in which students plot and analyze multiple parameters of corn planted and harvested for various purposes including food (human), feed (animal), ethanol production, and commodities exchanged for the years 1960 to present. Students then evaluate patterns and trends in the data and hypothesize relationships among the plotted data and environmental, social, and economic drivers, responses, and unintended consequences. After the central data analysis activity, students explore corn ethanol production as it relates to core disciplinary concepts in their individual classes. For example, students in Environmental Chemistry produce ethanol using corn and sugar as feedstocks and compare the efficiency of each process, while learning about enzymes, fermentation, distillation, and other chemical principles. Principles of Geology students examine the effects of agricultural runoff on surface water quality associated with extracting greater agricultural yield from mid-continent croplands. The American Government course examines the role of political institutions, the political process, and various
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Lung, Shun-fat
2009-01-01
Modern airplane design is a multidisciplinary task which combines several disciplines, such as structures, aerodynamics, flight controls, and sometimes heat transfer. Historically, analytical and experimental investigations concerning the interaction of the elastic airframe with aerodynamic and inertia loads have been conducted during the design phase to determine the existence of aeroelastic instabilities, so-called flutter. With the advent and increased usage of flight control systems, there is also a likelihood of instabilities caused by the interaction of the flight control system and the aeroelastic response of the airplane, known as aeroservoelastic instabilities. An in-house code, MPASES (Ref. 1), modified from PASES (Ref. 2), is a general-purpose digital computer program for the analysis of the closed-loop stability problem. This program used subroutines from the International Mathematical and Statistical Library (IMSL) (Ref. 3) to compute all of the real and/or complex conjugate pairs of eigenvalues of the Hessenberg matrix. For a high-fidelity configuration, these aeroelastic system matrices are large, and computing all eigenvalues is time consuming. A subspace iteration method (Ref. 4) for complex eigenvalue problems with nonsymmetric matrices has been formulated and incorporated into the modified program for aeroservoelastic stability (the MPASES code). The subspace iteration method solves only for the lowest p eigenvalues and corresponding eigenvectors for aeroelastic and aeroservoelastic analysis. In general, p ranges from 10 for wing flutter analysis to 50 for an entire-aircraft flutter analysis. The application of this newly incorporated code is an experiment known as the Aerostructures Test Wing (ATW), which was designed by the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center, Edwards, California, to research aeroelastic instabilities. Specifically, this experiment was used to study an instability
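The appeal of subspace iteration in the abstract above is that it computes only the p eigenpairs of interest rather than the full spectrum. Power iteration, the p = 1 special case, illustrates the principle on a toy matrix; this is a generic sketch, not the MPASES implementation, which targets large nonsymmetric aeroservoelastic matrices and would need shifts or inversion to reach the lowest eigenvalues:

```python
# Power iteration: the p = 1 special case of subspace iteration. It finds the
# dominant eigenpair by repeated matrix-vector products, never forming a full
# eigendecomposition of the matrix.
def power_iteration(A, iters=200):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # eigenvalue magnitude estimate
        v = [x / lam for x in w]       # renormalize the iterate
    return lam, v

A = [[2.0, 0.0], [0.0, 1.0]]  # toy matrix with eigenvalues 2 and 1
lam, v = power_iteration(A)
# lam converges to 2.0 and v to the eigenvector [1.0, 0.0]
```

Subspace iteration generalizes this by iterating a block of p vectors and re-orthogonalizing them each step, so a cluster of eigenvalues converges together.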
McKone, Thomas E.; Deshpande, Ashok W.
2004-06-14
In modeling complex environmental problems, we often fail to make precise statements about inputs and outcomes. In this case the fuzzy logic method native to the human mind provides a useful way to get at these problems. Fuzzy logic represents a significant change in both the approach to and outcome of environmental evaluations. Risk assessment is currently based on the implicit premise that probability theory provides the necessary and sufficient tools for dealing with uncertainty and variability. The key advantage of fuzzy methods is the way they reflect the human mind in its remarkable ability to store and process information which is consistently imprecise, uncertain, and resistant to classification. Our case study illustrates the ability of fuzzy logic to integrate statistical measurements with imprecise health goals. But we submit that fuzzy logic and probability theory are complementary and not competitive. In the world of soft computing, fuzzy logic has been widely used and has often been the "smart" behind smart machines. But it will require more effort and case studies to establish its niche in risk assessment or other types of impact assessment. Although we often hear complaints about "bright lines," could we adapt to a system that relaxes these lines to fuzzy gradations? Would decision makers and the public accept expressions of water or air quality goals in linguistic terms with computed degrees of certainty? Resistance is likely. In many regions, such as the US and European Union, it is likely that both decision makers and members of the public are more comfortable with our current system, in which government agencies avoid confronting uncertainties by setting guidelines that are crisp and often fail to communicate uncertainty. But some day perhaps a more comprehensive approach that includes exposure surveys, toxicological data, and epidemiological studies coupled with fuzzy modeling will go a long way in resolving some of the conflict, divisiveness
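The abstract's contrast between crisp "bright lines" and fuzzy gradations can be sketched with graded membership in linguistic terms. All membership functions and breakpoints below are illustrative assumptions, not regulatory values:

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside (a, c), peaking at 1 when x = b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for a contaminant concentration in ug/L;
# the breakpoints are placeholders chosen for illustration only.
terms = {
    "low":      lambda x: tri(x, -1.0, 0.0, 50.0),
    "moderate": lambda x: tri(x, 25.0, 75.0, 125.0),
    "high":     lambda x: tri(x, 100.0, 150.0, 1e9),
}

conc = 60.0
degrees = {name: round(mu(conc), 2) for name, mu in terms.items()}
# A single measurement belongs partly to "moderate" with degree 0.7, rather
# than falling entirely on one side of a crisp regulatory threshold.
```

A full fuzzy assessment would add rules ("if concentration is high and exposure is frequent, risk is high") and a defuzzification step, but the graded membership above is the core departure from crisp classification.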
NASA Astrophysics Data System (ADS)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
Speed and complexity characterize attention problems in children with localization-related epilepsy.
Berl, Madison M; Terwilliger, Virginia; Scheller, Alexandra; Sepeta, Leigh; Walkowiak, Jenifer; Gaillard, William D
2015-06-01
Children with epilepsy (EPI) have a higher rate of attention-deficit/hyperactivity disorder (ADHD; 28-70%) than typically developing (TD) children (5-10%); however, attention is multidimensional. Thus, we aimed to characterize the profile of attention difficulties in children with epilepsy. Seventy-five children with localization-related epilepsy ages 6-16 years and 75 age-matched controls were evaluated using multimodal, multidimensional measures of attention including direct performance and parent ratings of attention as well as intelligence testing. We assessed group differences across attention measures, determined if parent rating predicted performance on attention measures, and examined if epilepsy characteristics were associated with attention skills. The EPI group performed worse than the TD group on timed and complex aspects of attention (p < 0.05), whereas performance on simple visual and simple auditory attention tasks was comparable. Children with EPI were 12 times as likely as TD children to have clinically elevated symptoms of inattention as rated by parents, but ratings were a weak predictor of attention performance. Earlier age of onset was associated with slower motor speed (p < 0.01), but no other epilepsy-related clinical characteristics were associated with attention skills. This study clarifies the nature of the attention problems in pediatric epilepsy, which may be under-recognized. Children with EPI had difficulty with complex attention and rapid response, not simple attention. As such, they may not exhibit difficulty until later in primary school when demands increase. Parent report with standard ADHD screening tools may under-detect these higher-order attention difficulties. Thus, monitoring through direct neuropsychological performance is recommended. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
What Does (and Doesn't) Make Analogical Problem Solving Easy? A Complexity-Theoretic Perspective
ERIC Educational Resources Information Center
Wareham, Todd; Evans, Patricia; van Rooij, Iris
2011-01-01
Solving new problems can be made easier if one can build on experiences with other problems one has already successfully solved. The ability to exploit earlier problem-solving experiences in solving new problems seems to require several cognitive sub-abilities. Minimally, one needs to be able to retrieve relevant knowledge of earlier solved…
Managing the Complexity of Design Problems through Studio-Based Learning
ERIC Educational Resources Information Center
Cennamo, Katherine; Brandt, Carol; Scott, Brigitte; Douglas, Sarah; McGrath, Margarita; Reimer, Yolanda; Vernon, Mitzi
2011-01-01
The ill-structured nature of design problems makes them particularly challenging for problem-based learning. Studio-based learning (SBL), however, has much in common with problem-based learning and indeed has a long history of use in teaching students to solve design problems. The purpose of this ethnographic study of an industrial design class,…
McDonald, Ruth
2014-01-01
There is a trend in health systems around the world to place great emphasis on and faith in improving ‘leadership’. Leadership has been defined in many ways and the elitist implications of traditional notions of leadership sit uncomfortably with modern healthcare organisations. The concept of distributed leadership incorporates inclusivity, collectiveness and collaboration, with the result that, to some extent, all staff, not just those in senior management roles, are viewed as leaders. Leadership development programmes are intended to equip individuals to improve leadership skills, but we know little about their effectiveness. Furthermore, the content of these programmes varies widely and the fact that many lack a sense of how they fit with individual or organisational goals raises questions about how they are intended to achieve their aims. It is important to avoid simplistic assumptions about the ability of improved leadership to solve complex problems. It is also important to evaluate leadership development programmes in ways that go beyond descriptive accounts. PMID:25337595
Problem of phase transitions and thermodynamic stability in complex (dusty, colloid etc) plasmas
NASA Astrophysics Data System (ADS)
Martynova, I. A.; Iosilevskiy, I. L.
2016-11-01
Features of the first-order phase transitions in complex (dusty, colloid etc) plasma are under discussion. The basis for consideration is the well-known phase diagram of dusty plasma as a Debye system from Hamaguchi et al (1997 Phys. Rev. E 92 4671) in the Γ-κ plane (Γ is a Coulomb non-ideality parameter, κ is a screening parameter). The initial Γ-κ phase diagram from Hamaguchi et al is converted into standard thermodynamic variables in the temperature-density plane. Here 2-component electroneutral systems of macro- and microions (+Z, -1) and (-Z, +1) are considered as thermodynamically equilibrium ensembles of classical Coulomb particles. An extensive region of negative compressibility was revealed in the fluid state of the phase diagram of the initial Debye system when the system is considered as an equilibrium two-component electroneutral mixture of macro- and microions (+Z, -1) (or (-Z, +1)) under the equations of state from Hamaguchi et al (1997 Phys. Rev. E 92 4671) and Khrapak et al (2014 Phys. Rev. E 89 023102). This implies thermodynamic instability of the simplified Debye system in this domain. Non-linear screening and the unavoidable existence of additional phase transitions of gas-liquid and gas-crystal type are proposed as a hypothetical resolution of the discussed thermodynamic instability problem.
Richter, P; Hinton, J W; Reinhold, S
1998-11-01
Following Hinton et al. (1992, Biol. Psychol. 33, 63-71) and Richter et al. (1995, Biol. Psychol. 39, 131-142), ionic concentration of [K+] in unstimulated saliva was predicted to rise with perceived challenge, while lowered [Na+] was expected when experiencing psychological stress (PS). Subjects had to learn an engaging complex problem-solving 'game', via positive and negative feedback, on three 'games' lasting 2.5-3.0 h overall. Comparisons were made between three groups: (1) high success; (2) partial success ('strugglers'); and (3) total failure to learn. Saliva was sampled after resting and after each of three 'games'. Successful learners had a significant rise in [K+] on the first 'game' followed by a significant fall, consistent with task-challenge reaction followed by fast autonomic adaptation with successful learning. The 'strugglers' [Na+] fell significantly over the 'games', indicating mineralocorticoid-induced PS response of Na+ reabsorption. The 'total failure' subjects had generally significantly higher [K+] than the successful ones, showing raised tonic sympathetic relative to parasympathetic activity--an outcome interpreted in terms of interference theories. The 'failures' also had significantly higher tonic [Na+] on 'games'--indicating low PS as predicted from McGrath's (1976) theory.
McMahon, Michelle A; Christopher, Kimberly A
2011-08-19
As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.
NASA Technical Reports Server (NTRS)
Miller, L. L.
1968-01-01
Welder analyzer circuit evaluates and certifies resistance welding machines. The analyzer measures peak current, peak voltage, peak power, total energy, and first-pulse energy. It is used as an energy monitor while welding is being performed, or a precision shunt load for a pure electrical evaluation of the weld machine.
NASA Astrophysics Data System (ADS)
Urmanov, A. M.; Gribok, A. V.; Bozdogan, H.; Hines, J. W.; Uhrig, R. E.
2002-04-01
We propose an information complexity-based regularization parameter selection method for the solution of ill-conditioned inverse problems. The regularization parameter is selected to be the minimizer of the Kullback-Leibler (KL) distance between the unknown data-generating distribution and the fitted distribution. The KL distance is approximated by an information complexity criterion developed by Bozdogan. The method is not limited to the white Gaussian noise case. It can be extended to correlated and non-Gaussian noise. It can also account for possible model misspecification. We demonstrate the performance of the proposed method on a test problem from Hansen's regularization tools.
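The general recipe in the abstract above, solving the regularized problem over a grid of parameter values and keeping the minimizer of a selection criterion, can be sketched on a tiny ill-conditioned system. A simple misfit-plus-penalty score stands in for the paper's KL/information-complexity criterion; that substitution, and the toy data, are the key assumptions of this sketch:

```python
# Tikhonov regularization on a nearly singular 2x2 system: solve
# (A^T A + lam*I) x = A^T b for each lam on a grid, then keep the lam that
# minimizes a selection criterion (here a stand-in, not Bozdogan's ICOMP).

def tikhonov_2x2(A, b, lam):
    # Normal equations assembled explicitly; solved by Cramer's rule.
    ata = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

A = [[1.0, 1.0], [1.0, 1.0001]]  # nearly singular design matrix
b = [2.0, 2.0001]                # consistent with the exact solution (1, 1)

def criterion(lam):
    # Data misfit plus a norm penalty: a simple stand-in selection score.
    x = tikhonov_2x2(A, b, lam)
    misfit = sum((sum(A[i][j] * x[j] for j in range(2)) - b[i]) ** 2
                 for i in range(2))
    return misfit + lam * sum(v * v for v in x)

lams = [1e-8 * 10 ** k for k in range(9)]  # grid from 1e-8 to 1
best = min(lams, key=criterion)
x = tikhonov_2x2(A, b, best)               # regularized solution, near (1, 1)
```

Swapping in a different criterion only changes `criterion`; the scan-and-minimize structure is the same whether the score is a discrepancy principle, generalized cross-validation, or the KL-based criterion of the paper.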
World, We Have Problems: Simulation for Large Complex, Risky Projects, and Events
NASA Technical Reports Server (NTRS)
Elfrey, Priscilla
2010-01-01
Prior to a spacewalk during the NASA STS-129 mission in November 2009, Columbia Broadcasting System (CBS) correspondent William Harwood reported astronauts "were awakened again," as they had been the previous day. Fearing something not properly connected was causing a leak, the crew, both on the ground and in space, stopped and checked everything. The alarm proved false. The crew did complete its work ahead of schedule, but the incident reminds us that correctly connecting hundreds and thousands of entities, subsystems, and systems, finding leaks, loosening stuck valves, and adding replacements to very large complex systems over time does not occur magically. Everywhere, major projects present similar pressures. Lives are at risk. Responsibility is heavy. Large natural and human-created disasters introduce parallel difficulties as people work across the boundaries of their countries, disciplines, languages, and cultures, with known immediate dangers as well as the unexpected. NASA has long accepted that when humans have to go where humans cannot go, simulation is the sole solution. The Agency uses simulation to achieve consensus, reduce ambiguity and uncertainty, understand problems, make decisions, support design, do planning and troubleshooting, as well as for operations, training, testing, and evaluation. Simulation is at the heart of all such complex systems, products, projects, programs, and events. Difficult, hazardous short- and, especially, long-term activities have a persistent need for simulation, from the first insight into a possibly workable idea or answer until the final report, perhaps beyond our lifetime, is put in the archive. With simulation we create a common mental model, try out breakdowns of machinery or teamwork, and find opportunity for improvement. Lifecycle simulation proves to be increasingly important as risks and consequences intensify. Across the world, disasters are increasing. We anticipate more of them, as the results of global warming
Sorensen, E.G.; Gordon, C.M.
1959-02-10
Improvements in analog computing machines of the class capable of evaluating differential equations, commonly termed differential analyzers, are described. In general form, the analyzer embodies a plurality of basic computer mechanisms for performing integration, multiplication, and addition, and means for directing the result of any one operation to another computer mechanism performing a further operation. In the device, numerical quantities are represented by the rotation of shafts, or the electrical equivalent of shafts.
Untangling the Complex Needs of People Experiencing Gambling Problems and Homelessness
ERIC Educational Resources Information Center
Holdsworth, Louise; Tiyce, Margaret
2013-01-01
People with gambling problems are now recognised among those at increased risk of homelessness, and the link between housing and gambling problems has been identified as an area requiring further research. This paper discusses the findings of a qualitative study that explored the relationship between gambling problems and homelessness. Interviews…
A framework to approach problems of forensic anthropology using complex networks
NASA Astrophysics Data System (ADS)
Caridi, Inés; Dorso, Claudio O.; Gallo, Pablo; Somigliana, Carlos
2011-05-01
We have developed a method to analyze and interpret emerging structures in a set of data which lacks some information. It was conceived to be applied to the problem of obtaining information about people who disappeared in the Argentine province of Tucumán from 1974 to 1981. Although the military dictatorship in Argentina formally began in 1976 and lasted until 1983, the disappearance and assassination of people began some months earlier. During this period several circuits of Illegal Detention Centres (IDC) were set up in different locations all over the country. In these secret centres, disappeared people were illegally held without any sort of constitutional guarantees, and later assassinated. Even today, the final destination of most of the disappeared people's remains is still unknown. The fundamental hypothesis in this work is that a group of people with the same political affiliation whose disappearances were closely related in time and space shared the same place of captivity (the same IDC or circuit of IDCs). This hypothesis makes sense when applied to the systematic method of repression and disappearances which was actually launched in Tucumán, Argentina (2007) [11]. In this work, the missing individuals are identified as nodes of a network and connections are established among them based on the individuals' attributes while they were alive, by using rules to link them. In order to determine which rules are the most effective in defining the network, we use other kinds of knowledge available in this problem: previous results from the anthropological point of view (based on other sources of information, both oral and written, historical and anthropological data, etc.), and information about the place (one or more IDCs) where some people were kept during their captivity. For the best rules, a prediction about each person's possible destination is assigned (one or more IDCs where they could have been kept), and the success of the
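The core construction in the abstract above, treating individuals as network nodes and linking them by rules over shared attributes, can be sketched with toy data. All records, attributes, and thresholds below are hypothetical placeholders, not actual case data or the authors' rule set:

```python
from itertools import combinations

# Toy records standing in for individuals' attributes (entirely hypothetical).
people = [
    {"id": 1, "group": "A", "day": 10, "place": "X"},
    {"id": 2, "group": "A", "day": 12, "place": "X"},
    {"id": 3, "group": "B", "day": 50, "place": "Y"},
    {"id": 4, "group": "A", "day": 11, "place": "Y"},
]

def rule(p, q, max_gap=5):
    # Link two nodes if they shared a political affiliation and their
    # disappearances were close together in time.
    return p["group"] == q["group"] and abs(p["day"] - q["day"]) <= max_gap

edges = {tuple(sorted((p["id"], q["id"])))
         for p, q in combinations(people, 2) if rule(p, q)}
# edges -> {(1, 2), (1, 4), (2, 4)}: ids 1, 2, and 4 form one connected
# component, a candidate set of people held in the same detention circuit.
```

In the actual application, competing rules would be scored against the independently known captivity locations, and the best-scoring rules used to predict destinations for the unresolved cases.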
Kautenburger, Ralf; Hein, Christina; Sander, Jonas M; Beck, Horst P
2014-03-13
The complexation behavior of Aldrich humic acid (AHA) and a modified humic acid (AHA-PB) with blocked phenolic hydroxyl groups for trivalent lanthanides (Ln) is compared, and their influence on the mobility of Ln(III) in an aquifer is analyzed. As speciation technique, capillary electrophoresis (CE) was hyphenated with inductively coupled plasma mass spectrometry (ICP-MS). For metal loading experiments 25 mg L(-1) of AHA and different concentrations (cLn(Eu+Gd)=100-6000 μg L(-1)) of Eu(III) and Gd(III) in 10 mM NaClO4 at pH 5 were applied. By CE-ICP-MS, three Ln-fractions, assumed to be uncomplexed, weakly and strongly AHA-complexed metal, can be detected. For the used Ln/AHA-ratios conservative complex stability constants log βLnAHA decrease from 6.33 (100 μg L(-1) Ln(3+)) to 4.31 (6000 μg L(-1) Ln(3+)) with growing Ln-content. In order to verify the postulated weaker and stronger humic acid binding sites for trivalent Eu and Gd, a modified AHA with blocked functional groups was used. For these experiments 500 μg L(-1) Eu and 25 mg L(-1) AHA and AHA-PB in 10 mM NaClO4 at pH values ranging from 3 to 10 have been applied. With AHA-PB, where 84% of the phenolic OH-groups and 40% of the COOH-groups were blocked, Eu complexation was significantly lower, especially at the strong binding sites. The log β-values decrease from 6.11 (pH 10) to 5.61 at pH 3 (AHA) and for AHA-PB from 6.01 (pH 7) to 3.94 at pH 3. As a potential consequence, particularly humic acids with a high amount of strong binding sites (e.g. phenolic OH- and COOH-groups) can be responsible for a higher metal mobility in the aquifer due to the formation of dissolved negatively charged metal-humate species. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Smith, Jr., Everett V.; Kulikowich, Jonna M.
2004-01-01
This study describes the use of generalizability theory (GT) and many-facet Rasch measurement (MFRM) to evaluate psychometric properties of responses obtained from an assessment designed to measure complex problem-solving skills. The assessment revolved around the school activity of kickball. The task required of each student was to decide on a…
ERIC Educational Resources Information Center
Muis, Krista R.; Psaradellis, Cynthia; Chevrier, Marianne; Di Leo, Ivana; Lajoie, Susanne P.
2016-01-01
We developed an intervention based on the learning by teaching paradigm to foster self-regulatory processes and better learning outcomes during complex mathematics problem solving in a technology-rich learning environment. Seventy-eight elementary students were randomly assigned to 1 of 2 conditions: learning by preparing to teach, or learning for…
ERIC Educational Resources Information Center
Mainert, Jakob; Kretzschmar, André; Neubert, Jonas C.; Greiff, Samuel
2015-01-01
Transversal skills, such as complex problem solving (CPS) are viewed as central twenty-first-century skills. Recent empirical findings have already supported the importance of CPS for early academic advancement. We wanted to determine whether CPS could also contribute to the understanding of career advancement later in life. Towards this end, we…
ERIC Educational Resources Information Center
Schweizer, Fabian; Wustenberg, Sascha; Greiff, Samuel
2013-01-01
This study examines the validity of the complex problem solving (CPS) test MicroDYN by investigating a) the relation between its dimensions--rule identification (exploration strategy), rule knowledge (acquired knowledge), rule application (control performance)--and working memory capacity (WMC), and b) whether CPS predicts school grades in…
ERIC Educational Resources Information Center
Öllinger, Michael; Hammon, Stephanie; von Grundherr, Michael; Funke, Joachim
2015-01-01
Causal mapping is often recognized as a technique to support strategic decisions and actions in complex problem situations. Such drawing of causal structures is supposed to particularly foster the understanding of the interaction of the various system elements and to further encourage holistic thinking. It builds on the idea that humans make use…
NASA Technical Reports Server (NTRS)
1994-01-01
The ChemScan UV-6100 is a spectrometry system originally developed by Biotronics Technologies, Inc. under a Small Business Innovation Research (SBIR) contract. It is marketed to the water and wastewater treatment industries, replacing "grab sampling" with on-line data collection. It analyzes the light absorbance characteristics of a water sample, simultaneously detects hundreds of individual wavelengths absorbed by chemical substances in a process solution, and quantifies the information. Spectral data are then processed by the ChemScan analyzer and compared with calibration files in the system's memory in order to calculate concentrations of chemical substances that cause UV light absorbance in specific patterns. Monitored substances can be analyzed for quality and quantity. Applications include detection of a variety of substances, and the information provided enables an operator to control a process more efficiently.
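As a rough illustration of the calibration step described above, multiwavelength absorbance can be treated as a linear (Beer-Lambert) mix of analyte contributions and inverted by least squares. The coefficient matrix, analyte count, and helper names below are hypothetical, not the ChemScan's actual calibration scheme.

```python
# Hedged sketch: concentrations from multiwavelength absorbance via
# least squares. Calibration coefficients are illustrative.

def solve_2x2(a, b):
    """Cramer's rule for a 2x2 system a*x = b."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    x1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return [x0, x1]

def concentrations(E, absorbances):
    """Normal equations (E^T E) c = E^T a for 2 analytes, any # wavelengths."""
    EtE = [[sum(E[w][i] * E[w][j] for w in range(len(E))) for j in range(2)]
           for i in range(2)]
    Eta = [sum(E[w][i] * absorbances[w] for w in range(len(E)))
           for i in range(2)]
    return solve_2x2(EtE, Eta)

# Calibration matrix: rows = wavelengths, columns = analytes (illustrative)
E = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.8]]
true_c = [2.0, 3.0]
a = [sum(E[w][i] * true_c[i] for i in range(2)) for w in range(3)]
print([round(c, 6) for c in concentrations(E, a)])  # → [2.0, 3.0]
```

With more wavelengths than analytes the fit is overdetermined, which is what makes simultaneous detection of "hundreds of individual wavelengths" useful.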
NASA Technical Reports Server (NTRS)
1992-01-01
In the 1970's, NASA provided funding for development of an automatic blood analyzer for Skylab at the Oak Ridge National Laboratory (ORNL). ORNL devised "dynamic loading," which employed a spinning rotor to load, transfer, and analyze blood samples by centrifugal processing. A refined, commercial version of the system was produced by ABAXIS and is marketed as the portable ABAXIS MiniLab MCA. Used in a doctor's office, the equipment can perform 80 to 100 chemical blood tests on a single drop of blood and report results in five minutes. Further development is anticipated.
1999-12-01
Defibrillator analyzers automate the inspection and preventive maintenance (IPM) testing of defibrillators. They need to be able to test at least four basic defibrillator performance characteristics: discharge energy, synchronized-mode operation, automated external defibrillation, and ECG monitoring. We prefer that they also be able to test a defibrillator's external noninvasive pacing function--but this is not essential if a facility already has a pacemaker analyzer that can perform this testing. In this Evaluation, we tested seven defibrillator analyzers from six suppliers. All seven units accurately measure the energies of a variety of discharge waveforms over a wide range of energy levels--from 1 J for use in a neonatal intensive care unit to 360 J for use on adult patients requiring maximum discharge energy. Most of the analyzers are easy to use. However, only three of the evaluated units could perform the full range of defibrillator tests that we prefer. We rated these units Acceptable--Preferred. Three more units could perform four of the five tests; however, they could not test the pacing feature of a defibrillator. These units were rated Acceptable. The seventh unit could perform only discharge energy testing and synchronized-mode testing and was difficult to use. We rated that unit Acceptable--Not Recommended.
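The core discharge-energy measurement these analyzers perform can be sketched as numerical integration of instantaneous power delivered into a test load. The damped-sine waveform, 50-ohm load, and sampling rate below are illustrative assumptions, not any particular analyzer's design.

```python
# Hedged sketch: discharge energy as the time integral of v(t)^2 / R.
import math

def discharge_energy_joules(samples_v, dt_s, load_ohms=50.0):
    """Trapezoidal integration of instantaneous power v^2/R."""
    powers = [v * v / load_ohms for v in samples_v]
    return sum((powers[i] + powers[i + 1]) * 0.5 * dt_s
               for i in range(len(powers) - 1))

# Synthetic damped-sine discharge sampled at 100 kHz for 10 ms
dt = 1e-5
wave = [2000.0 * math.exp(-400 * t) * math.sin(2 * math.pi * 100 * t)
        for t in (i * dt for i in range(1000))]
print(round(discharge_energy_joules(wave, dt), 1))
```

An analyzer spec spanning 1 J to 360 J just means this integral must stay accurate across a very wide dynamic range of peak voltages.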
NASA Technical Reports Server (NTRS)
1993-01-01
Under a NASA Small Business Innovation Research (SBIR) contract, Axiomatics Corporation developed a shunting dielectric sensor to determine nutrient levels and analyze plant nutrient solutions in CELSS, NASA's space life support program. (CELSS is an experimental facility investigating closed-cycle plant growth and food processing for long duration manned missions.) The DiComp system incorporates a shunt electrode and is especially sensitive to dielectric property changes in materials, at measurement levels much lower than conventional sensors. The analyzer has exceptional capabilities for predicting the composition of liquid streams or reactions. It measures concentrations and solids content up to 100 percent in applications like agricultural products, petrochemicals, food and beverages. The sensor is easily installed; maintenance is low, and it can be calibrated on line. The software automates data collection and analysis.
Benner, William H.
1986-01-01
An oxygen analyzer that identifies and classifies microgram quantities of oxygen in ambient particulate matter and quantitates organic oxygen in solvent extracts of ambient particulate matter. A sample is pyrolyzed in oxygen-free nitrogen gas (N₂), and the resulting oxygen is quantitatively converted to carbon monoxide (CO) by contact with hot granular carbon (C). Two analysis modes are made possible: (1) rapid determination of total pyrolyzable oxygen, obtained by decomposing the sample at 1135 °C, or (2) temperature-programmed oxygen thermal analysis, obtained by heating the sample from room temperature to 1135 °C as a function of time. The analyzer basically comprises a pyrolysis tube containing a bed of granular carbon under N₂, ovens used to heat the carbon and/or decompose the sample, and a non-dispersive infrared CO detector coupled to a minicomputer to quantitate oxygen in the decomposition products and control oven heating.
NASA Technical Reports Server (NTRS)
1982-01-01
California Measurements, Inc.'s model PC-2 Aerosol Particle Analyzer is produced in both airborne and ground-use versions. Originating from NASA technology, it is a quick and accurate method of detecting minute mass loadings on a quartz crystal, making it useful as a highly sensitive detector of fine particles suspended in air. When combined with a suitable air delivery system, it provides immediate information on the size distribution and mass concentrations of aerosols. William Chiang obtained a NASA license for multiple crystal oscillator technology and initially developed a particle analyzer for NASA use with Langley Research Center assistance. Later his company produced the modified PC-2 for commercial applications. Brunswick Corporation uses the device for atmospheric research and in studies of smoke particles in fires. The PC-2 is used by pharmaceutical and chemical companies in research on inhalation toxicology and environmental health. It is also useful in testing various filters for safety masks and nuclear installations.
Kelley, G.G.
1959-11-10
A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.
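The interrogation circuit's pileup rule above, disregarding a pulse followed too closely by another, can be sketched in a few lines. The dead-window length and timestamps are illustrative.

```python
# Hedged sketch: a pulse is analyzed only if no second pulse arrives
# within a dead window after it (pileup rejection).

def accept_pulses(arrival_times_us, dead_window_us=2.0):
    """Return pulses not followed too closely by another pulse."""
    accepted = []
    for i, t in enumerate(arrival_times_us):
        nxt = arrival_times_us[i + 1] if i + 1 < len(arrival_times_us) else None
        if nxt is None or nxt - t >= dead_window_us:
            accepted.append(t)
    return accepted

# The pulse at 5.0 us is followed 0.5 us later, so it is discarded
print(accept_pulses([0.0, 5.0, 5.5, 12.0]))  # → [0.0, 5.5, 12.0]
```

Rejecting the early pulse of a close pair trades a little counting efficiency for amplitude accuracy, which is the point of the circuit described above.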
ERIC Educational Resources Information Center
Yates, Jennifer L.
2011-01-01
The purpose of this research study was to explore the process of learning and development of problem solving skills in radiologic technologists. The researcher sought to understand the nature of difficult problems encountered in clinical practice, to identify specific learning practices leading to the development of professional expertise, and to…
Pupils' Problem-Solving Processes in a Complex Computerized Learning Environment.
ERIC Educational Resources Information Center
Suomala, Jyrki; Alajaaski, Jarkko
2002-01-01
Describes a study that examined fifth-grade Finnish pupils' problem-solving processes in a LEGO/Logo technology-based learning environment. Results indicate that learning model and gender account for group differences in problem solving processes, and are interpreted as supporting the validity of discovery learning. (Author/LRW)
The Role of Prior Knowledge and Problem Contexts in Students' Explanations of Complex Systems
ERIC Educational Resources Information Center
Barth-Cohen, Lauren April
2012-01-01
The purpose of this dissertation is to study students' competencies in generating scientific explanations within the domain of complex systems, an interdisciplinary area in which students tend to have difficulties. While considering students' developing explanations of how complex systems work, I investigate the role of prior knowledge…
ERIC Educational Resources Information Center
Downton, Ann; Sullivan, Peter
2017-01-01
While the general planning advice offered to mathematics teachers seems to be to start with simple examples and build complexity progressively, the research reported in this article is a contribution to the body of literature that argues the reverse. That is, posing of appropriately complex tasks may actually prompt the use of more sophisticated…
NASA Technical Reports Server (NTRS)
1994-01-01
Measurement of the total organic carbon content in water is important in assessing contamination levels in high purity water for power generation, pharmaceutical production and electronics manufacture. Even trace levels of organic compounds can cause defects in manufactured products. The Sievers Model 800 Total Organic Carbon (TOC) Analyzer, based on technology developed for the Space Station, uses a strong chemical oxidizing agent and ultraviolet light to convert organic compounds in water to carbon dioxide. After ionizing the carbon dioxide, the amount of ions is determined by measuring the conductivity of the deionized water. The new technique is highly sensitive, does not require compressed gas, and maintenance is minimal.
2016-05-06
ISS047e106715 (05/06/2016) --- ESA (European Space Agency) astronaut Tim Peake unpacks a cerebral and cochlear fluid pressure (CCFP) analyzer. The device is being tested to measure the pressure of the fluid in the skull, also known as intracranial pressure, which may increase due to fluid shifts in the body while in microgravity. It is hypothesized that the headward fluid shift that occurs during space flight leads to increased pressure in the brain, which may push on the back of the eye, causing it to change shape.
NASA Technical Reports Server (NTRS)
1983-01-01
A miniature gas chromatograph, a system which separates a gaseous mixture into its components and measures the concentration of the individual gases, was designed for the Viking Lander. The technology was further developed under a National Institute for Occupational Safety and Health (NIOSH) program, funded by Ames Research Center/Stanford, as a toxic gas leak detection device. Three researchers on the project later formed Microsensor Technology, Inc. to commercialize the product. It is a battery-powered system consisting of a sensing wand connected to a computerized analyzer. Marketed as the Michromonitor 500, it has a wide range of applications.
NASA Technical Reports Server (NTRS)
Mayo, L. H.
1975-01-01
The contextual approach is discussed, which undertakes to demonstrate that technology assessment assists in the identification of the full range of implications of taking a particular action and facilitates the consideration of alternative means by which the total affected social problem context might be changed by available project options. It is found that the social impacts of an application on participants, institutions, processes, and social interests, and the accompanying interactions, may not only induce modifications in the problem context delineated for examination with respect to the design, operations, regulation, and use of the posited application, but also affect related social problem contexts.
Ward, Paul R; Meyer, Samantha B; Verity, Fiona; Gill, Tiffany K; Luong, Tini C N
2011-08-05
In order to improve the health of the most vulnerable groups in society, the WHO Commission on Social Determinants of Health (CSDH) called for multi-sectoral action, which requires research and policy on the multiple and inter-linking factors shaping health outcomes. Most conceptual tools available to researchers tend to focus on singular and specific social determinants of health (SDH) (e.g. social capital, empowerment, social inclusion). However, a new and innovative conceptual framework, known as social quality theory, facilitates a more complex and complete understanding of the SDH, with its focus on four domains: social cohesion, social inclusion, social empowerment and socioeconomic security, all within the same conceptual framework. This paper provides both an overview of social quality theory in addition to findings from a national survey of social quality in Australia, as a means of demonstrating the operationalisation of the theory. Data were collected using a national random postal survey of 1044 respondents in September, 2009. Multivariate logistic regression analysis was conducted. Statistical analysis revealed that people on lower incomes (less than $45000) experience worse social quality across all of the four domains: lower socio-economic security, lower levels of membership of organisations (lower social cohesion), higher levels of discrimination and less political action (lower social inclusion) and lower social empowerment. The findings were mixed in terms of age, with people over 65 years experiencing lower socio-economic security, but having higher levels of social cohesion, experiencing lower levels of discrimination (higher social inclusion) and engaging in more political action (higher social empowerment). In terms of gender, women had higher social cohesion than men, although also experienced more discrimination (lower social inclusion). Applying social quality theory allows researchers and policy makers to measure and respond to the
NASA Astrophysics Data System (ADS)
Ruggles, Clive L. N.
Archaeoastronomical field survey typically involves the measurement of structural orientations (i.e., orientations along and between built structures) in relation to the visible landscape and particularly the surrounding horizon. This chapter focuses on the process of analyzing the astronomical potential of oriented structures, whether in the field or as a desktop appraisal, with the aim of establishing the archaeoastronomical "facts". It does not address questions of data selection (see instead Chap. 25, "Best Practice for Evaluating the Astronomical Significance of Archaeological Sites", 10.1007/978-1-4614-6141-8_25) or interpretation (see Chap. 24, "Nature and Analysis of Material Evidence Relevant to Archaeoastronomy", 10.1007/978-1-4614-6141-8_22). The main necessity is to determine the azimuth, horizon altitude, and declination in the direction "indicated" by any structural orientation. Normally, there are a range of possibilities, reflecting the various errors and uncertainties in estimating the intended (or, at least, the constructed) orientation, and in more formal approaches an attempt is made to assign a probability distribution extending over a spread of declinations. These probability distributions can then be cumulated in order to visualize and analyze the combined data from several orientations, so as to identify any consistent astronomical associations that can then be correlated with the declinations of particular astronomical objects or phenomena at any era in the past. The whole process raises various procedural and methodological issues and does not proceed in isolation from the consideration of corroborative data, which is essential in order to develop viable cultural interpretations.
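The central computation mentioned above, deriving a declination from a measured azimuth and horizon altitude at a given latitude, follows the standard spherical-astronomy relation sin δ = sin φ sin h + cos φ cos h cos A. A minimal sketch; the function name and example values are illustrative.

```python
# Hedged sketch: declination indicated by a structural orientation,
# from azimuth A, horizon altitude h, and site latitude phi.
import math

def declination_deg(azimuth_deg, altitude_deg, latitude_deg):
    A = math.radians(azimuth_deg)
    h = math.radians(altitude_deg)
    phi = math.radians(latitude_deg)
    s = math.sin(phi) * math.sin(h) + math.cos(phi) * math.cos(h) * math.cos(A)
    return math.degrees(math.asin(s))

# Due east (A = 90), level horizon (h = 0): declination 0 at any latitude
print(round(declination_deg(90.0, 0.0, 51.5), 3))  # → 0.0
```

In practice each measured azimuth and altitude carries an uncertainty, so this conversion is applied over a spread of inputs to build the probability distribution of declinations the chapter describes.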
Geelen, R; Bleijenberg, G
1999-04-01
An application in a psychogeriatric nursing home. This article describes the application of mediative behaviour therapy in a psychogeriatric nursing home. Behavioural interventions carried out by the nursing team addressed a variety of problems: quarrelling between an institutionalised woman and her visiting husband, complaints made by the husband about the staff to team members of another department, and a patient who let herself drop to the floor about once a week. Special attention is given to the analysis of the problems and the learning of appropriate responses by team members, as well as to changing their cognitions and emotions about the problem behaviours. A meaningful reduction of the problem behaviours and of the burden experienced by team members was achieved.
The Concept of Influence and Its Use in Structuring Complex Decision Problems
1979-12-01
decade of application. [1,6,7,9,17] However, so far a theory has not been presented to show that deterministic sensitivity is the best criterion on... theory with the conviction that the logical methodology is equally applicable to all decision problems, from deciding on new business ventures or... application of the influence concept to decision problems. CHAPTER 2: Toward a Theory of Influence. 2.1 Introduction. Influence diagrams are an
Occurrence and use of complex resonances (poles in scattering and radiation problems)
Miller, E.K.
1981-12-15
In a wide variety of physics problems, especially those which involve wave phenomena such as in electromagnetics and acoustics, a behavior results that can be described by systems of linear (partial) differential equations. Solutions to such problems often can be expressed simply in the form of an exponential series. Some specific background material for this approach is discussed, and a variety of example applications is summarized. (WHK)
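The exponential-series solution form mentioned above (singularity-expansion style) can be sketched as a sum of complex-pole terms; the pole and residue values below are illustrative, not from the paper.

```python
# Hedged sketch: a transient response modeled as sum_k R_k * exp(s_k * t),
# where the complex poles s_k encode damping (real part) and
# oscillation frequency (imaginary part).
import cmath

def exponential_series(t, poles, residues):
    """Real part of sum_k R_k * exp(s_k * t) for a conjugate-symmetric set."""
    return sum(R * cmath.exp(s * t) for s, R in zip(poles, residues)).real

# Conjugate pole pair s = -1 +/- 5j with residues 0.5 each
# gives the damped oscillation exp(-t) * cos(5 t)
poles = [-1 + 5j, -1 - 5j]
residues = [0.5, 0.5]
print(round(exponential_series(0.0, poles, residues), 3))  # → 1.0
```

Each physical resonance contributes one conjugate pair, which is why pole locations compactly summarize scattering and radiation behavior.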
NASA Technical Reports Server (NTRS)
Lokerson, D. C. (Inventor)
1977-01-01
A speech signal is analyzed by applying the signal to formant filters which derive first, second and third signals respectively representing the frequency of the speech waveform in the first, second and third formants. A first pulse train, whose pulse rate approximately represents the average frequency of the first formant, is derived; second and third pulse trains, with pulse rates respectively representing zero crossings of the second and third formants, are also derived. The first formant pulse train is derived by establishing N signal level bands, where N is an integer at least equal to two. Adjacent signal bands have common boundaries, each of which is a predetermined percentage of the peak level of a complete cycle of the speech waveform.
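The zero-crossing pulse trains described for the second and third formants amount to emitting one pulse per positive-going zero crossing, so the pulse rate tracks the band's dominant frequency. A minimal sketch on a synthetic signal; names and parameters are illustrative.

```python
# Hedged sketch: pulse train from positive-going zero crossings.
import math

def zero_crossing_pulses(samples):
    """Indices where the signal crosses zero going positive."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < 0.0 <= samples[i]]

# 0.1 s of a 200 Hz sine sampled at 8 kHz: about one crossing per period
sr = 8000
sig = [math.sin(2 * math.pi * 200 * n / sr) for n in range(800)]
print(len(zero_crossing_pulses(sig)))
```

On a band-limited formant output, counting such pulses per unit time gives a cheap frequency estimate, which is the idea the patent exploits.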
Hansen, A.D.
1987-09-28
An optical analyzer wherein a sample of particulate matter, and particularly of organic matter, which has been collected on a quartz fiber filter is placed in a combustion tube, and light from a light source is passed through the sample. The temperature of the sample is raised at a controlled rate and in a controlled atmosphere. The magnitude of the transmission of light through the sample is detected as the temperature is raised. A data processor, differentiator and a two pen recorder provide a chart of the optical transmission versus temperature and the rate of change of optical transmission versus temperature signatures (T and D) of the sample. These signatures provide information as to physical and chemical processes and a variety of quantitative and qualitative information about the sample. Additional information is obtained by repeating the run in different atmospheres and/or different rates of heating with other samples of the same particulate material collected on other filters. 7 figs.
Clewley, Richard; Stupple, Edward J N
2015-01-01
Many complex work environments rely heavily on cognitive operators using rules. Operators sometimes fail to implement rules, with catastrophic human, social and economic costs. Rule-based error is widely reported, yet the mechanisms of rule vulnerability have received less attention. This paper examines rule vulnerability in the complex setting of airline transport operations. We examined 'the stable approach criteria rule', which acts as a system defence during the approach to land. The study experimentally tested whether system state complexity influenced rule failure. The results showed increased uncertainty and dynamism led to increased likelihood of rule failure. There was also an interaction effect, indicating complexity from different sources can combine to further constrain rule-based response. We discuss the results in relation to recent aircraft accidents and suggest that 'rule-based error' could be progressed to embrace rule vulnerability, fragility and failure. This better reflects the influence that system behaviour and cognitive variety have on rule-based response. Practitioner Summary: In this study, we examined mechanisms of rule vulnerability in the complex setting of airline transport operations. The results suggest work scenarios featuring high uncertainty and dynamism constrain rule-based response, leading to rules becoming vulnerable, fragile or failing completely. This has significant implications for rule-intensive, safety critical work environments.
Hansen, Anthony D.
1989-01-01
An optical analyzer (10) wherein a sample (19) of particulate matter, and particularly of organic matter, which has been collected on a quartz fiber filter (20) is placed in a combustion tube (11), and light from a light source (14) is passed through the sample (19). The temperature of the sample (19) is raised at a controlled rate and in a controlled atmosphere. The magnitude of the transmission of light through the sample (19) is detected (18) as the temperature is raised. A data processor (23), differentiator (28) and a two pen recorder (24) provide a chart of the optical transmission versus temperature and the rate of change of optical transmission versus temperature signatures (T and D) of the sample (19). These signatures provide information as to physical and chemical processes and a variety of quantitative and qualitative information about the sample (19). Additional information is obtained by repeating the run in different atmospheres and/or different rates of heating with other samples of the same particulate material collected on other filters.
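The two recorded signatures, transmission versus temperature (T) and its rate of change (D), can be sketched with a central-difference derivative. The curve values below are synthetic and illustrative, not instrument data.

```python
# Hedged sketch: D signature as the numerical derivative of the
# T (transmission vs. temperature) signature.

def d_signature(temps_c, transmission):
    """Central-difference rate of change of transmission w.r.t. temperature."""
    d = []
    for i in range(1, len(temps_c) - 1):
        d.append((transmission[i + 1] - transmission[i - 1]) /
                 (temps_c[i + 1] - temps_c[i - 1]))
    return d

# Synthetic T curve: transmission dips (char forms), then recovers
# as the absorbing material burns off at higher temperature
temps = list(range(100, 700, 100))            # 100..600 C
trans = [1.00, 0.80, 0.40, 0.45, 0.90, 1.00]
print([round(x, 5) for x in d_signature(temps, trans)])
```

Peaks in the D curve mark the temperatures where optical changes are fastest, which is what makes the derivative a useful signature of the underlying processes.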
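The decision problem itself is easy to state even though it is NP-complete. As a minimal illustrative sketch (not the paper's polynomial-time parametric method), a backtracking search produces a certificate in both directions for small graphs: a proper coloring when one exists, or a proof-by-exhaustion that none does:

```python
def three_color(edges, n):
    """Backtracking search for a proper 3-coloring of an n-vertex graph.

    Returns a color assignment (a yes-certificate) if one exists, or None
    after exhausting the search space -- a no-certificate by brute force,
    unlike the polynomial-size absence certificates of the paper.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colors = [None] * n

    def assign(v):
        if v == n:
            return True
        for c in range(3):
            # A color is admissible if no already-colored neighbor uses it.
            if all(colors[w] != c for w in adj[v]):
                colors[v] = c
                if assign(v + 1):
                    return True
        colors[v] = None  # backtrack
        return False

    return list(colors) if assign(0) else None

# A 4-cycle is 3-colorable; the complete graph K4 is not.
print(three_color([(0, 1), (1, 2), (2, 3), (3, 0)], 4) is not None)  # True
print(three_color([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 4))  # None
```

The exponential worst case of this brute-force search is precisely what motivates the parametric, certificate-producing method of the article.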
Extension of the tridiagonal reduction (FEER) method for complex eigenvalue problems in NASTRAN
NASA Technical Reports Server (NTRS)
Newman, M.; Mann, F. I.
1978-01-01
As in the case of real eigenvalue analysis, the eigensolutions closest to a selected point in the eigenspectrum were extracted from a reduced, symmetric, tridiagonal eigenmatrix whose order was much lower than that of the full size problem. The reduction process was effected automatically, and thus avoided the arbitrary lumping of masses and other physical quantities at selected grid points. The statement of the algebraic eigenvalue problem admitted mass, damping, and stiffness matrices which were unrestricted in character, i.e., they might be real, symmetric or nonsymmetric, singular or nonsingular.
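The core numerical idea, extracting the eigensolutions closest to a selected point in the spectrum, can be illustrated with shift-invert iteration. The sketch below is a generic dense-matrix illustration in Python, not NASTRAN's FEER implementation; the matrix and shift are hypothetical:

```python
import numpy as np

def eigen_nearest(a, shift, iters=200):
    """Shift-invert power iteration: converges to the eigenvalue of `a`
    nearest the chosen shift point, the same selection principle used when
    extracting eigensolutions near a selected point in the eigenspectrum."""
    n = a.shape[0]
    # Power iteration on (A - shift*I)^-1 amplifies the eigenvector whose
    # eigenvalue lies closest to the shift.
    m = np.linalg.inv(a - shift * np.eye(n))
    v = np.ones(n)
    for _ in range(iters):
        v = m @ v
        v /= np.linalg.norm(v)
    # The Rayleigh quotient recovers the eigenvalue of the original matrix.
    return (v @ a @ v) / (v @ v)

# Eigenvalues are 1, 3 and 10; the one nearest the shift 2.5 is 3.
print(round(eigen_nearest(np.diag([1.0, 3.0, 10.0]), 2.5), 6))  # 3.0
```

Production methods such as FEER combine this selection principle with a reduction to a small symmetric tridiagonal problem rather than a dense inverse.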
Bishop, Brian J; Dzidic, Peta L
2014-03-01
Causal layered analysis (CLA) is an emerging qualitative methodology adopted in the discipline of planning as an approach to deconstruct complex social issues. With psychologists increasingly confronted with complex and "wicked" social and community issues, we argue that the discipline of psychology would benefit from adopting CLA as an analytical method. Until now, the application of CLA for data interpretation has generally been poorly defined and overwhelming for the novice. In this paper we propose an approach to CLA that provides a method for the deconstruction and analysis of complex social psychological issues. We introduce CLA as a qualitative methodology well suited to psychology, introduce the epistemological foundations of CLA, define a space for its adoption within the discipline, and outline the steps for conducting a CLA using an applied example.
NASA Astrophysics Data System (ADS)
Kobylkin, Konstantin
2016-10-01
Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with a smallest-cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported for special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is established for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
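The Hitting Set formulation can be made concrete with the classic greedy approximation, which repeatedly picks the element that hits the most not-yet-hit sets; in the geometric setting each set would be the collection of candidate disk centers whose r-neighbourhood covers a given segment. This is the generic logarithmic-factor greedy, not the paper's constant-factor geometric algorithm:

```python
def greedy_hitting_set(sets):
    """Greedy approximation for Hitting Set: repeatedly choose the element
    contained in the largest number of remaining (not-yet-hit) sets.
    Guarantees a ln(n)-factor approximation in the general case."""
    remaining = [set(s) for s in sets]
    chosen = []
    while remaining:
        counts = {}
        for s in remaining:
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)  # element hitting the most sets
        chosen.append(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

# Element 2 hits the first two sets at once, so two elements suffice.
print(greedy_hitting_set([{1, 2}, {2, 3}, {4}]))  # [2, 4]
```

The structure exploited in the paper is that the sets are r-neighbourhoods of segments of a proximity graph, which is what permits a constant approximation factor at suitable scales of r.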
A jumbo problem: mapping the structure and functions of the nuclear pore complex
Fernandez-Martinez, Javier; Rout, Michael P
2012-01-01
Macromolecular assemblies can be intrinsically refractive to classical structural analysis, due to their size, complexity, plasticity and dynamic nature. One such assembly is the nuclear pore complex (NPC). The NPC is formed from ~450 copies of 30 different proteins, called nucleoporins, and is the sole mediator of exchange between the nucleus and the cytoplasm in eukaryotic cells. Despite significant progress, it has become increasingly clear that new approaches, integrating different sources of structural and functional data, will be needed to understand the functional biology of the NPC. Here, we discuss the latest approaches trying to address this challenge. PMID:22321828
Brooksbank, W.A. Jr.; Leddicotte, G.W.; Strain, J.E.; Hendon, H.H. Jr.
1961-11-14
A means was developed for continuously computing and indicating the isotopic assay of a process solution and for automatically controlling the process output of isotope separation equipment to provide a continuous output of the desired isotopic ratio. A counter tube is surrounded with a sample to be analyzed so that the tube is exactly in the center of the sample. A source of fast neutrons is provided and is spaced from the sample. The neutrons from the source are thermalized by causing them to pass through a neutron moderator, and the neutrons are allowed to diffuse radially through the sample to actuate the counter. A reference counter in a known sample of pure solvent is also actuated by the thermal neutrons from the neutron source. The number of neutrons which actuate the detectors is a function of the concentration of the elements in solution and their neutron absorption cross sections. The pulses produced by the detectors responsive to each neutron passing therethrough are amplified and counted. The respective times required to accumulate a selected number of counts are measured by associated timing devices. The concentration of a particular element in solution may be determined by utilizing the following relation: T2/T1 = BCR, where B is a constant proportional to the absorption cross sections, T2 is the time of count collection for the unknown solution, T1 is the time of count collection for the pure solvent, R is the isotopic ratio, and C is the molar concentration of the element to be determined. Knowing the slope constant B for any element, the isotopic concentration may be readily determined when the chemical concentration is known, and conversely, when the isotopic ratio is known, the chemical concentration may be determined. (AEC)
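The stated relation T2/T1 = BCR can be rearranged directly for whichever quantity is unknown. A minimal sketch, with illustrative function names and hypothetical numbers:

```python
def molar_concentration(t2, t1, b, r):
    """Solve T2/T1 = B*C*R for the molar concentration C, given the
    element's slope constant B and a known isotopic ratio R."""
    return (t2 / t1) / (b * r)

def isotopic_ratio(t2, t1, b, c):
    """Conversely, solve T2/T1 = B*C*R for the isotopic ratio R when
    the chemical (molar) concentration C is known."""
    return (t2 / t1) / (b * c)

# Hypothetical values: B = 4.0, R = 0.5, count times T2 = 2.0, T1 = 1.0.
print(molar_concentration(2.0, 1.0, 4.0, 0.5))  # 1.0
print(isotopic_ratio(2.0, 1.0, 4.0, 1.0))       # 0.5
```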
ERIC Educational Resources Information Center
Wilmington Public Schools, DE.
The general purpose of the twelfth grade course is to help the student assume his role as a decision-maker in a democratic society. The nature and complexity of contemporary problems are examined using this guide to enable the student: 1) to analyze alternative solutions to these problems; 2) to develop attitudes and values appropriate to a…
USDA-ARS?s Scientific Manuscript database
Present-day environmental problems of Dryland East Asia are serious, and future prospects look especially disconcerting owing to current trends in population growth and economic development. Land degradation and desertification, invasive species, biodiversity losses, toxic waste and air pollution, a...
ERIC Educational Resources Information Center
Hyytinen, Heidi; Holma, Katariina; Toom, Auli; Shavelson, Richard J.; Lindblom-Ylänne, Sari
2014-01-01
The study utilized a multi-method approach to explore the connection between critical thinking and epistemological beliefs in a specific problem-solving situation. Data drawn from a sample of ten third-year bioscience students were collected using a combination of a cognitive lab and a performance task from the Collegiate Learning Assessment…
ERIC Educational Resources Information Center
Hill, Jennie; Powlitch, Stephanie; Furniss, Frederick
2008-01-01
The current study aimed to replicate and extend Rojahn et al. [Rojahn, J., Aman, M. G., Matson, J. L., & Mayville, E. (2003). "The aberrant behavior checklist and the behavior problems inventory: Convergent and divergent validity." "Research in Developmental Disabilities", 24, 391-404] by examining the convergent validity of the behavior problems…
ETD QA CORE TEAM: AN ELOQUENT SOLUTION TO A COMPLEX PROBLEM
Thomas J. Hughes, QA and Records Manager, Experimental Toxicology Division (ETD), National Health and Environmental Effects Research Laboratory (NHEERL), ORD, U.S. EPA, RTP, NC 27709
ETD is the largest health divis...
The problem of ecological scaling in spatially complex, nonequilibrium ecological systems [chapter 3
Samuel A. Cushman; Jeremy Littell; Kevin McGarigal
2010-01-01
In the previous chapter we reviewed the challenges posed by spatial complexity and temporal disequilibrium to efforts to understand and predict the structure and dynamics of ecological systems. The central theme was that spatial variability in the environment and population processes fundamentally alters the interactions between species and their environments, largely...
1988-02-01
involving conducting surfaces with complex geometries. I. Theory. Mohamed F. El-Hewle and Richard I. Cook, Frank J. Seiler Research Laboratory, U.S. Air... ...mittance are then employed with the new geometry, and can be generalized to a surface of an arbitrary coordinate function as follows. At a surface
Foucault as Complexity Theorist: Overcoming the Problems of Classical Philosophical Analysis
ERIC Educational Resources Information Center
Olssen, Mark
2008-01-01
This article explores the affinities and parallels between Foucault's Nietzschean view of history and models of complexity developed in the physical sciences in the twentieth century. It claims that Foucault's rejection of structuralism and Marxism can be explained as a consequence of his own approach which posits a radical ontology whereby the…
Simplifying the model of a complex heat-transfer system for solving the relay control problem
NASA Astrophysics Data System (ADS)
Shilin, A. A.; Bukreev, V. G.
2014-09-01
A method is proposed for approximating the high-dimensionality model of a complex heat-transfer system with time delay by a nonlinear second-order differential equation. Modeling results are presented that confirm the adequacy of the nonlinear properties of the reduced and initial models and their correspondence to actual data from the controlled plant.
The Species Problem and the Value of Teaching and the Complexities of Species
ERIC Educational Resources Information Center
Chung, Carl
2004-01-01
Discussions of species taxa refer directly to a range of complex biological phenomena. Given these phenomena, biologists have developed, and continue to appeal to, a series of species concepts without arriving at a single clear definition, as each species concept tells part of the story or helps biologists explain and understand a subset of…
Analogize This! The Politics of Scale and the Problem of Substance in Complexity-Based Composition
ERIC Educational Resources Information Center
Roderick, Noah R.
2012-01-01
In light of recent enthusiasm in composition studies (and in the social sciences more broadly) for complexity theory and ecology, this article revisits the debate over how much composition studies can or should align itself with the natural sciences. For many in the discipline, the science debate--which was ignited in the 1970s, both by the…
ERIC Educational Resources Information Center
Eseryel, Deniz; Ge, Xun; Ifenthaler, Dirk; Law, Victor
2011-01-01
Following a design-based research framework, this article reports two empirical studies with an educational MMOG, called "McLarin's Adventures," on facilitating 9th-grade students' complex problem-solving skill acquisition in interdisciplinary STEM education. The article discusses the nature of complex and ill-structured problem solving…
The Center for Computational Sciences and Engineering (CCSE) develops and applies advanced computational methodologies to solve large-scale scientific and engineering problems arising in the Department of Energy (DOE) mission areas involving energy, environmental, and industrial technology. The primary focus is in the application of structured-grid finite difference methods on adaptive grid hierarchies for compressible, incompressible, and low Mach number flows. The diverse range of scientific applications that drive the research typically involve a large range of spatial and temporal scales (e.g. turbulent reacting flows) and require the use of extremely large computing hardware, such as the 153,000-core computer, Hopper, at NERSC. The CCSE approach to these problems centers on the development and application of advanced algorithms that exploit known separations in scale; for many of the application areas this results in algorithms that are several orders of magnitude more efficient than traditional simulation approaches.
Studying PubMed usages in the field for complex problem solving: Implications for tool design.
Mirel, Barbara; Song, Jean; Tonks, Jennifer Steiner; Meng, Fan; Xuan, Weijian; Ameziane, Rafiqa
2013-05-01
Many recent studies on MEDLINE-based information seeking have shed light on scientists' behaviors and associated tool innovations that may improve efficiency and effectiveness. Few if any studies, however, examine scientists' problem-solving uses of PubMed in actual contexts of work and corresponding needs for better tool support. Addressing this gap, we conducted a field study of novice scientists (14 upper level undergraduate majors in molecular biology) as they engaged in a problem solving activity with PubMed in a laboratory setting. Findings reveal many common stages and patterns of information seeking across users as well as variations, especially variations in cognitive search styles. Based on findings, we suggest tool improvements that both confirm and qualify many results found in other recent studies. Our findings highlight the need to use results from context-rich studies to inform decisions in tool design about when to offer improved features to users.
Zarzycki, Paweł K; Portka, Joanna K
2015-09-01
Pentacyclic triterpenoids, particularly hopanoids, are organism-specific compounds and are generally considered useful biomarkers that allow fingerprinting and classification of biological, environmental and geological samples. Simultaneous quantification of various hopanoids together with a battery of related non-polar and low-molecular-mass compounds may provide principal information for geochemical and environmental research focusing on both modern and ancient investigations. Target compounds can be derived from microbial biomass, water columns, sediments, coals, crude fossils or rocks. This creates a number of analytical problems due to the differing compositions of the analytical matrix and interfering compounds, and therefore proper optimization of quantification protocols for such biomarkers is still a challenge. In this work we summarize typical analytical protocols that have recently been applied for quantification of hopanoid-like compounds from different samples. The main steps, including extraction of the components of interest, pre-purification, fractionation, derivatization and quantification involving gas (1D and 2D) as well as liquid separation techniques (liquid-liquid extraction, solid-phase extraction, planar and low-resolution column chromatography, high-performance liquid chromatography), are described and discussed from a practical point of view, mainly based on the experimental papers published within the last two years, during which a significant increase in hopanoid research was noticed. The second aim of this review is to describe the latest research trends concerning determination of hopanoids and related low-molecular-mass lipids analyzed in various samples including sediments, rocks, coals, crude oils and plant fossils as well as stromatolites and microbial biomass cultivated under different conditions. It has been found that the majority of the most recent papers are based on a uni- or bivariate approach to complex data analysis. Data interpretation involves
Criteria for assessing problem solving and decision making in complex environments
NASA Technical Reports Server (NTRS)
Orasanu, Judith
1993-01-01
Training crews to cope with unanticipated problems in high-risk, high-stress environments requires models of effective problem solving and decision making. Existing decision theories use the criteria of logical consistency and mathematical optimality to evaluate decision quality. While these approaches are useful under some circumstances, the assumptions underlying these models frequently are not met in dynamic time-pressured operational environments. Also, applying formal decision models is both labor and time intensive, a luxury often lacking in operational environments. Alternate approaches and criteria are needed. Given that operational problem solving and decision making are embedded in ongoing tasks, evaluation criteria must address the relation between those activities and satisfaction of broader task goals. Effectiveness and efficiency become relevant for judging reasoning performance in operational environments. New questions must be addressed: What is the relation between the quality of decisions and overall performance by crews engaged in critical high risk tasks? Are different strategies most effective for different types of decisions? How can various decision types be characterized? A preliminary model of decision types found in air transport environments will be described along with a preliminary performance model based on an analysis of 30 flight crews. The performance analysis examined behaviors that distinguish more and less effective crews (based on performance errors). Implications for training and system design will be discussed.
Dadds, Mark Richard; Cauchi, Avril Jessica; Wimalaweera, Subodha; Hawes, David John; Brennan, John
2012-10-30
Impairments in emotion recognition skills are a trans-diagnostic indicator of early mental health problems and may be responsive to intervention. We report on a randomized controlled trial of "Emotion-recognition-training" (ERT) versus treatment-as-usual (TAU) with N=195 mixed diagnostic children (mean age 10.52 years) referred for behavioral/emotional problems measured at pre- and 6 months post-treatment. We tested overall outcomes plus moderation and mediation models, whereby diagnostic profile was tested as a moderator of change. ERT had no impact on the group as a whole. Diagnostic status of the child did not moderate outcomes; however, levels of callous-unemotional (CU) traits moderated outcomes such that children with high CU traits responded less well to TAU, while ERT produced significant improvements in affective empathy and conduct problems in these children. Emotion recognition training has potential as an adjunctive intervention specifically for clinically referred children with high CU traits, regardless of their diagnostic status.
1987-10-01
Richard L. Henneman and William B. Rouse; contract MDA903-2-C-0145. ...measure of quality. The literature review [Henneman and Rouse 1986] also suggested that an appropriate dependent measure of complexity is the... Henneman, R.L., and W.B. Rouse. Measures of human performance in fault diagnosis tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-14(1):99
Infinite-range exterior complex scaling as a perfect absorber in time-dependent problems
Scrinzi, Armin
2010-05-15
We introduce infinite-range exterior complex scaling (irECS), which provides complete absorption of outgoing flux in numerical solutions of the time-dependent Schroedinger equation with strong infrared fields. This is demonstrated by computing high-harmonic spectra and wave-function overlaps with the exact solution for a one-dimensional model system and by three-dimensional calculations for the H atom and a Ne atom model. We lay out the key ingredients for correct implementation and identify criteria for efficient discretization.
OBSESSIVE COMPULSIVE DISORDER: IS IT A PROBLEM OF COMPLEX MOTOR PROGRAMMING?*
Khanna, Sumant; Mukundan, C.R.; Channabasavanna, S.M.
1987-01-01
SUMMARY: 44 subjects with Obsessive compulsive disorder (OCD) and 40 normals were compared using an experimental paradigm involving recording of the bereitschaftspotential. A decreased onset latency and increased amplitude were found in the OCD sample as compared to normals. A neurophysiological substrate for the bereitschaftspotential has been proposed. The implications of these findings in OCD, as compared to Gilles de la Tourette syndrome, and for a focal neurophysiological dysfunction are also discussed. The findings of this study implicate a dysfunction in complex motor programming in OCD, with the possibility of this dysfunction being in the prefrontal area. PMID:21927207
Traveling salesman problems with PageRank Distance on complex networks reveal community structure
NASA Astrophysics Data System (ADS)
Jiang, Zhongzhou; Liu, Jing; Wang, Shuai
2016-12-01
In this paper, we propose a new algorithm for community detection problems (CDPs) based on traveling salesman problems (TSPs), labeled as TSP-CDA. Since TSPs need to find a tour with minimum cost, cities close to each other are usually clustered in the tour. This inspired us to model CDPs as TSPs by taking each vertex as a city. Then, in the final tour, the vertices in the same community tend to cluster together, and the community structure can be obtained by cutting the tour into a couple of paths. There are two challenges. The first is to define a suitable distance between each pair of vertices which can reflect the probability that they belong to the same community. The second is to design a suitable strategy to cut the final tour into paths which can form communities. In TSP-CDA, we deal with these two challenges by defining a PageRank Distance and an automatic threshold-based cutting strategy. The PageRank Distance is designed with the intrinsic properties of CDPs in mind, and can be calculated efficiently. In the experiments, benchmark networks with 1000-10,000 nodes and varying structures are used to test the performance of TSP-CDA. A comparison is also made between TSP-CDA and two well-established community detection algorithms. The results show that TSP-CDA can find accurate community structure efficiently and outperforms the two existing algorithms.
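The final cutting step can be sketched independently of how the tour and the PageRank Distance are computed. Assuming a closed tour and some pairwise distance function (both hypothetical here), a threshold cut, a simplified stand-in for the paper's automatic threshold-based strategy, splits the tour into candidate communities at its longest edges:

```python
def cut_tour(tour, dist, threshold):
    """Cut a closed TSP tour into paths (candidate communities) at every
    edge whose length exceeds `threshold`. `dist(u, v)` stands in for a
    measure such as the paper's PageRank Distance."""
    n = len(tour)
    # Indices i where the tour edge (tour[i], tour[i+1]) is cut.
    cuts = [i for i in range(n) if dist(tour[i], tour[(i + 1) % n]) > threshold]
    if not cuts:
        return [list(tour)]  # no cut: the whole tour is one community
    communities = []
    for j, c in enumerate(cuts):
        start = (c + 1) % n            # first vertex after this cut
        end = cuts[(j + 1) % len(cuts)]  # vertex just before the next cut
        comm, k = [], start
        while True:
            comm.append(tour[k])
            if k == end:
                break
            k = (k + 1) % n
        communities.append(comm)
    return communities

# Two tight groups {0,1,2} and {3,4,5}: within-group distance 1, across 10.
dist = lambda a, b: 10 if (a < 3) != (b < 3) else 1
print(cut_tour([0, 1, 2, 3, 4, 5], dist, 5))  # [[3, 4, 5], [0, 1, 2]]
```

In TSP-CDA the quality of the result hinges on the distance reflecting co-membership probability, which is what the PageRank Distance is designed to capture.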
Motion artifacts in MRI: A complex problem with many partial solutions.
Zaitsev, Maxim; Maclaren, Julian; Herbst, Michael
2015-10-01
Subject motion during magnetic resonance imaging (MRI) has been problematic since its introduction as a clinical imaging modality. While sensitivity to particle motion or blood flow can be used to provide useful image contrast, bulk motion presents a considerable problem in the majority of clinical applications. It is one of the most frequent sources of artifacts. Over 30 years of research have produced numerous methods to mitigate or correct for motion artifacts, but no single method can be applied in all imaging situations. Instead, a "toolbox" of methods exists, where each tool is suitable for some tasks, but not for others. This article reviews the origins of motion artifacts and presents current mitigation and correction methods. In some imaging situations, the currently available motion correction tools are highly effective; in other cases, appropriate tools still need to be developed. It seems likely that this multifaceted approach will be what eventually solves the motion sensitivity problem in MRI, rather than a single solution that is effective in all situations. This review places a strong emphasis on explaining the physics behind the occurrence of such artifacts, with the aim of aiding artifact detection and mitigation in particular clinical situations. © 2015 Wiley Periodicals, Inc.
Frank, Harry
2011-11-01
Frank and Frank et al. (1982-1987) administered a series of age-graded training and problem-solving tasks to samples of Eastern timber wolf (C. lupus lycaon) and Alaskan Malamute (C. familiaris) pups to test Frank's (Zeitschrift für Tierpsychologie 53:389-399, 1980) model of the evolution of information processing under conditions of natural and artificial selection. Results confirmed the model's prediction that wolves should perform better than dogs on problem-solving tasks and that dogs should perform better than wolves on training tasks. Further data collected at the University of Connecticut in 1983 revealed a more complex and refined picture, indicating that species differences can be mediated by a number of factors influencing wolf performance, including socialization regimen (hand-rearing vs. mother-rearing), interactive effects of socialization on the efficacy of both rewards and punishments, and the flexibility to select learning strategies that experimenters might not anticipate.
Epting, Shane
2016-12-01
Transportation infrastructure tremendously affects the quality of life for urban residents, influences public and mental health, and shapes social relations. Historically, the topic is rich with social and political controversy, and the resultant transit systems in the United States have created problems for minority residents and broader issues for the public. Environmental justice frameworks provide a means to identify and address harms that affect marginalized groups, but environmental justice has limits and cannot account for the mainstream population. To address this condition, I employ a complex moral assessment measure that provides a way to talk about harms that affect the public.
Lee, W; Kim, T-S; Cho, M; Lee, S
2005-01-01
In studying bioelectromagnetic problems, the finite element method offers several advantages over other conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropy. Mesh generation is the first requirement in finite element analysis, and there are many different approaches to mesh generation. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes, resulting in numerous elements in the smaller volume regions and thereby increasing computational load and demand. In this work, we present an improved content-adaptive mesh generation scheme that is efficient and fast, along with options to change the contents of meshes. For demonstration, mesh models of the head from a volume MRI are presented in 2-D and 3-D.
NASA Astrophysics Data System (ADS)
Wassereau, Thibault; Ablitzer, Frédéric; Pézerat, Charles; Guyader, Jean-Louis
2017-07-01
This paper addresses the problem of estimating the local viscoelastic parameters of sandwich beams. An original procedure involving an inverse vibratory method (Force Analysis Technique) and the Timoshenko beam theory is detailed and applied experimentally to a sample with a honeycomb core. The guiding philosophy is to treat multi-layer beams as equivalent homogeneous structures. This simplified approach is thought to be more representative of the global dynamic behaviour; in addition, the reduction in degrees of freedom is a clear advantage for modelling in finite element software. Compared with other usual approaches, the method developed in this paper shows very good agreement between the experimental sandwich beam and the homogeneous model, which offers interesting prospects for applying it to industrial structures. The local character, robustness, and self-regularization properties are verified over a wide frequency range, making the procedure potentially efficient for characterization of structures on a production line, flaw detection, and Structural Health Monitoring.
NASA Astrophysics Data System (ADS)
Quinn, J.; Reed, P. M.; Giuliani, M.; Castelletti, A.
2016-12-01
Optimizing the operations of multi-reservoir systems poses several challenges: 1) the high dimension of the problem's states and controls, 2) the need to balance conflicting multi-sector objectives, and 3) understanding how uncertainties impact system performance. These difficulties motivated the development of the Evolutionary Multi-Objective Direct Policy Search (EMODPS) framework, in which multi-reservoir operating policies are parameterized in a given family of functions and then optimized for multiple objectives through simulation over a set of stochastic inputs. However, properly framing these objectives remains a severe challenge and a neglected source of uncertainty. Here, we use EMODPS to optimize operating policies for a 4-reservoir system in the Red River Basin in Vietnam, exploring the consequences of optimizing to different sets of objectives related to 1) hydropower production, 2) meeting multi-sector water demands, and 3) providing flood protection to the capital city of Hanoi. We show how coordinated operation of the reservoirs can differ markedly depending on how decision makers weigh these concerns. Moreover, we illustrate how formulation choices that emphasize the mean, tail, or variability of performance across objective combinations must be evaluated carefully. Our results show that these choices can significantly improve attainable system performance, or yield severe unintended consequences. Finally, we show that satisfactory validation of the operating policies on a set of out-of-sample stochastic inputs depends as much or more on the formulation of the objectives as on effective optimization of the policies. These observations highlight the importance of carefully considering how we abstract stakeholders' objectives and of iteratively optimizing and visualizing multiple problem formulation hypotheses to ensure that we capture the most important tradeoffs that emerge from different stakeholder preferences.
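The EMODPS workflow described in this abstract — an operating policy drawn from a parameterized family, simulated over stochastic inputs, and screened for multi-objective performance — can be sketched minimally. Everything below (the single toy reservoir, the linear policy family, the random candidate search, and all constants) is an illustrative assumption, not the Red River study's actual model:

```python
import random

random.seed(1)

CAPACITY = 100.0     # toy reservoir capacity (arbitrary units)
FLOOD_LEVEL = 90.0   # storage above which we count a "flood"

def policy_release(storage, theta):
    """One simple member of a policy family: release grows linearly
    with storage, clipped to physical limits. theta = (intercept, slope)."""
    a, b = theta
    return max(0.0, min(storage, a + b * storage))

def simulate(theta, inflows):
    """Simulate storage dynamics over an inflow trace; return two
    objectives: mean release (maximize) and flood frequency (minimize)."""
    storage, releases, floods = 50.0, [], 0
    for q in inflows:
        storage = min(CAPACITY, storage + q)
        r = policy_release(storage, theta)
        storage -= r
        releases.append(r)
        if storage > FLOOD_LEVEL:
            floods += 1
    return sum(releases) / len(releases), floods / len(inflows)

# Synthetic stochastic inflow ensemble (lognormal, purely illustrative).
inflows = [random.lognormvariate(2.0, 0.5) for _ in range(500)]

# Crude policy search: sample candidates, keep the non-dominated set.
candidates = [(random.uniform(0, 10), random.uniform(0, 1)) for _ in range(200)]
evaluated = [(theta, simulate(theta, inflows)) for theta in candidates]
pareto = [
    (t, obj) for t, obj in evaluated
    if not any(o2[0] >= obj[0] and o2[1] <= obj[1] and o2 != obj
               for _, o2 in evaluated)
]
print(f"{len(pareto)} non-dominated policies out of {len(evaluated)}")
```

A real EMODPS application would replace the random search with a multi-objective evolutionary algorithm and the linear rule with, e.g., radial basis functions, but the simulate-then-rank structure is the same.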
The problem of the Culex pipiens complex in the South Pacific (including Australia)
Dobrotworsky, N. V.
1967-01-01
There are three representatives of the Culex pipiens complex in the South Pacific. C. p. fatigans is the most common and most widely distributed subspecies; it is closely associated with man. The males can be readily distinguished by the structure of the phallosome of the terminalia. C. p. molestus is spread over the southern part of Australia and in Tasmania; it also is a domestic mosquito. Throughout its extensive range in Australia, it exhibits all the biological traits that distinguish it from C. p. pipiens. C. p. australicus is widely distributed over the mainland of Australia and in Tasmania. It is superficially similar to C. p. fatigans but can be distinguished from C. p. pallens by the structure of the phallosome. It is primarily a rural non-man-biting mosquito. C. p. australicus is probably a relatively ancient member of the Australian fauna that may have evolved in the southern temperate zone. PMID:5300062
An unstructured-grid software system for solving complex aerodynamic problems
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Pirzadeh, Shahyar; Parikh, Paresh
1995-01-01
A coordinated effort has been underway over the past four years to elevate unstructured-grid methodology to a mature level. The goal of this endeavor is to provide a validated capability to non-expert users for performing rapid aerodynamic analysis and design of complex configurations. The Euler component of the system is well developed, and is impacting a broad spectrum of engineering needs with capabilities such as rapid grid generation and inviscid flow analysis, inverse design, interactive boundary layers, and propulsion effects. Progress is also being made in the more tenuous Navier-Stokes component of the system. A robust grid generator is under development for constructing quality thin-layer tetrahedral grids, along with a companion Navier-Stokes flow solver. This paper presents an overview of this effort, along with a perspective on the present and future status of the methodology.
Effective descriptions of complex quantum systems: path integrals and operator ordering problems
NASA Astrophysics Data System (ADS)
Eckern, U.; Gruber, M. J.; Schwab, P.
2005-09-01
[Dedicated to Bernhard Mühlschlegel on the occasion of his 80th birthday] We study certain aspects of the effective, occasionally called collective, description of complex quantum systems within the framework of the path integral formalism, in which the environment is integrated out. Generalising the standard Feynman-Vernon Caldeira-Leggett model to include a non-linear coupling between particle and environment, and considering a particular spectral density of the coupling, a coordinate-dependent mass (or velocity-dependent potential) is obtained. The related effective quantum theory, which depends on the proper discretisation of the path integral, is derived and discussed. As a result, we find that in general a simple effective low-energy Hamiltonian, in which only the coordinate-dependent mass enters, cannot be formulated. The quantum theory of weakly coupled superconductors and the quantum dynamics of vortices in Josephson junction arrays are physical examples where these considerations, in principle, are of relevance.
Fish, Rob D; Ioris, Antonio A R; Watson, Nigel M
2010-11-01
This paper examines governance requirements for integrating water and agricultural management (IWAM). The institutional arrangements for the agriculture and water sectors are complex and multi-dimensional, and integration cannot therefore be achieved through a simplistic 'additive' policy process. Effective integration requires the development of a new collaborative approach to governance that is designed to cope with scale dependencies and interactions, uncertainty and contested knowledge, and interdependency among diverse and unequal interests. When combined with interdisciplinary research, collaborative governance provides a viable normative model because of its emphasis on reciprocity, relationships, learning and creativity. Ultimately, such an approach could lead to the sorts of system adaptations and transformations that are required for IWAM. Copyright © 2009 Elsevier B.V. All rights reserved.
Brueck, Martin; Heidt, Martin C; Szente-Varga, Michael; Bandorski, Dirk; Kramer, Wilfried; Vogt, Paul R
2006-12-01
Conventional surgical treatment of complex aortic pathologies involving several thoracoabdominal aortic segments necessitates extended incisions or subsequent surgeries, resulting in significant mortality and morbidity rates. The combination of surgery and simultaneous stenting in the operating theater may reduce the surgical trauma. A total of nine patients (62 +/- 10 years, range 44-70) underwent a combined surgical and endovascular treatment of thoracic or thoracoabdominal aortic aneurysms or chronic dissection. Five patients were treated with viscero-renal artery translocation followed by transfemoral stenting of the entire thoracoabdominal aorta. Two patients underwent debranching of the supraaortic vessels followed by immediate transfemoral stenting of the aortic arch, and two patients with a history of an ascending aortic aneurysm repair were treated with open surgical debranching of the supraaortic trunks and repair of the ascending aorta and aortic arch with the elephant trunk technique. Preoperatively, magnetic resonance imaging was used to check supraaortic and intracranial vessels as well as the completeness of the Circle of Willis prior to arch stenting and/or supraaortic vessel surgery. Cerebrospinal fluid drainage and induced mild hypertension were used for one-step thoracoabdominal aortic stenting. The thirty-day mortality rate and incidence of paraplegia were 0%. There was a single reversible perioperative stroke after aortic arch stenting. One patient required temporary renal replacement therapy using continuous arterio-venous hemofiltration. There was one early reoperation at the superior mesenteric artery after viscero-renal translocation. Four type I endoleaks occurred in three patients requiring two interventions. All patients have been discharged to home. The innovative combination of simultaneous conventional surgery and stenting reduces the operative burden for patients with complex aortic pathologies involving several segments of the thoracic
NASA Astrophysics Data System (ADS)
Canfora, Fabrizio
2016-10-01
I analyze quantum mechanical scattering off a topological defect (such as a Dirac monopole) as well as Yukawa-like potentials representing the typical effects of strong interactions. This system, due to the presence of a short-range potential, can be analyzed using the powerful technique of complex angular momenta, which, so far, has not been employed in the presence of monopoles (nor of other topological solitons). Because spatial spherical symmetry is achieved only up to internal rotations, the partial wave expansion becomes very similar to the Jacob-Wick helicity amplitudes for particles with spin. However, since the angular-momentum operator has an extra "internal" contribution, fixed cuts in the complex angular momentum plane appear. Correspondingly, the background integral in the Regge formula does not decrease for large values of |cos θ| (namely, large values of the Mandelstam variable s). Hence, the experimental observation of this kind of behavior could be a direct signal of nontrivial topological structures in strong interactions. The possible relations of these results with the soft Pomeron are briefly analyzed.
Bengochea, M; Alvarez, I; Toledo, R; Carretto, E; Forteza, D
2010-01-01
The National Kidney Transplant Program with cadaveric donors is based on a centralized and unique waitlist, serum bank, and allocation criteria, approved by the Instituto Nacional de Donación y Trasplante (INDT) in agreement with clinical teams. The median donor rate over the last 3 years is 20 per million population, and the median number of waitlist candidates is 450. The increasing number of waiting list patients and the rapid aging of our population demanded strategies for donor acceptance, candidate assignment, and analysis of more efficient and equitable allocation models. The objectives of the new national allocation system were to improve posttransplant patient and graft survival, allow equal access to transplantation, and reduce waitlist times. The objective of this study was to analyze variables in our current allocation system and to create a mathematical/simulation model to evaluate a new allocation system. We compared candidates and transplanted patients for gender, age, ABO blood group, human leukocyte antigens (HLA), percentage of reactive antibodies (PRA), and waiting list and dialysis times. Only 2 factors showed differences: highly sensitized patients and patients >65 years old (Bernoulli test). An agreement between the INDT and the Engineering Faculty opened a major field of study. During 2008 the data analysis and model building began. The waiting list data of the last decade of donors and transplants were processed to develop a virtual model. We used inputs of candidates and donors, with outputs and structure of the simulation system to evaluate the proposed changes. Currently, the INDT and the Mathematics and Statistics Institute are working to develop a simulation model able to analyze our new national allocation system.
Analyzing Bilingual Education Costs.
ERIC Educational Resources Information Center
Bernal, Joe J.
This paper examines the particular problems involved in analyzing the costs of bilingual education and suggests that cost analysis of bilingual education requires a fundamentally different approach than that followed in other recent school finance studies. Focus of the discussion is the Intercultural Development Research Association's (IDRA)…
Gómez-Hernández, J Jaime
2006-01-01
It is difficult to define complexity in modeling. Complexity is often associated with uncertainty since modeling uncertainty is an intrinsically difficult task. However, modeling uncertainty does not require, necessarily, complex models, in the sense of a model requiring an unmanageable number of degrees of freedom to characterize the aquifer. The relationship between complexity, uncertainty, heterogeneity, and stochastic modeling is not simple. Aquifer models should be able to quantify the uncertainty of their predictions, which can be done using stochastic models that produce heterogeneous realizations of aquifer parameters. This is the type of complexity addressed in this article.
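The core idea here — quantifying predictive uncertainty through an ensemble of equally probable heterogeneous realizations of aquifer parameters — can be illustrated with a deliberately simple sketch. The 1-D grid, the moving-average correlation model, and the "effective conductivity" prediction below are hypothetical stand-ins for a real stochastic aquifer model:

```python
import math
import random

random.seed(0)

N = 60         # number of 1-D grid cells
WINDOW = 5     # smoothing window -> spatial correlation length
N_REAL = 100   # number of equally probable realizations

def realization():
    """One heterogeneous log-conductivity field: spatially correlated
    noise obtained by moving-average smoothing of white noise."""
    white = [random.gauss(0.0, 1.0) for _ in range(N + WINDOW)]
    return [sum(white[i:i + WINDOW]) / WINDOW for i in range(N)]

fields = [realization() for _ in range(N_REAL)]

# A toy "prediction" per realization: effective conductivity as the
# geometric mean of exp(logK); the spread across realizations is the
# uncertainty estimate the abstract argues models should provide.
preds = [math.exp(sum(f) / N) for f in fields]
mean_pred = sum(preds) / N_REAL
spread = (sum((p - mean_pred) ** 2 for p in preds) / N_REAL) ** 0.5
print(f"effective K: {mean_pred:.3f} +/- {spread:.3f}")
```

The point of the sketch matches the abstract's argument: the model stays simple (two parameters control the heterogeneity), yet the ensemble still yields a quantified uncertainty.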
Mumtaz, M M; George, J D; Gold, K W; Cibulas, W; DeRosa, C T
1996-01-01
Polycyclic Aromatic Hydrocarbons (PAHs) are a group of chemicals that are formed during the incomplete burning of coal, oil, gas, wood, garbage, or other organic substances, such as tobacco and charbroiled meat. There are more than 100 PAHs. PAHs generally occur as complex mixtures (for example, as part of products such as soot), not as single compounds. PAHs are found throughout the environment in the air, water, and soil. As part of its mandate, the Agency for Toxic Substances and Disease Registry (ATSDR) prepares toxicological profiles on hazardous chemicals, including PAHs (ATSDR, 1995), found at facilities on the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) National Priorities List (NPL) and which pose the most significant potential threat to human health, as determined by ATSDR and the Environmental Protection Agency (EPA). These profiles include information on health effects of chemicals from different routes and durations of exposure, their potential for exposure, regulations and advisories, and the adequacy of the existing database. Assessing the health effects of PAHs is a major challenge because environmental exposures to these chemicals are usually to complex mixtures of PAHs with other chemicals. The biological consequences of human exposure to mixtures of PAHs depend on the toxicity, carcinogenic and noncarcinogenic, of the individual components of the mixture, the types of interactions among them, and confounding factors that are not thoroughly understood. Also identified are components of exposure and health effects research needed on PAHs that will allow estimation of realistic human health risks posed by exposures to PAHs. The exposure assessment component of research should focus on (1) development of reliable analytical methods for the determination of bioavailable PAHs following ingestion, (2) estimation of bioavailable PAHs from environmental media, particularly the determination of particle-bound PAHs, (3
On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs
NASA Astrophysics Data System (ADS)
Jiang, Minghui
We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^O(k), 2^O(k log k)} · poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, the total number of adjacencies in the strips, or the number of strips in the optimal solution.
Combination Therapies for Lysosomal Storage Diseases: A Complex Answer to a Simple Problem
Macauley, Shannon L
2017-01-01
Lysosomal storage diseases (LSDs) are a group of 40–50 rare monogenic disorders that result in disrupted lysosomal function and subsequent lysosomal pathology. Depending on the protein or enzyme deficiency associated with each disease, LSDs affect an array of organ systems and elicit a complex set of secondary disease mechanisms that make many of these disorders difficult to fully treat. The etiology of most LSDs is known and the innate biology of lysosomal enzymes favors therapeutic intervention, yet most attempts at treating LSDs with enzyme replacement strategies fall short of being curative. Even with the advent of more sophisticated approaches, like substrate reduction therapy, pharmacologic chaperones, gene therapy or stem cell therapy, comprehensive treatments for LSDs have yet to be achieved. Given the limitations of individual therapies, recent research has focused on using a combination approach to treat LSDs. By coupling protein-, cell-, and gene-based therapies with small molecule drugs, researchers have found greater success in eradicating the clinical features of disease. This review discusses the positives and negatives of singular therapies used to treat LSDs and examines how, in combination, studies have demonstrated a more holistic benefit on pathological and functional parameters. By optimizing routes of delivery, therapeutic timing, and targeting of secondary disease mechanisms, combination therapy represents the future for LSD treatment. PMID:27491211
NASA Astrophysics Data System (ADS)
Dawes, W. N.
This paper describes some recent developments in the application of unstructured mesh, solution-adaptive methods to the solution of the three-dimensional Navier-Stokes equations in turbomachinery flows. By adopting a simple, pragmatic but systematic approach to mesh generation, the variety of simulations which can be attempted ranges from simple turbomachinery blade-blade primary paths towards complex secondary gas paths and can include the interactions between the two paths. By adopting a hierarchical data structure, mesh refinement and derefinement can be performed sufficiently economically that it becomes practical to perform unsteady flow simulations with zones of mesh refinement ‘following’ unsteady flow features, like vortices and wakes, through a coarse background mesh. The combined benefits of the approach result in a powerful analytical ability. Solutions for a wide range of steady flows are presented including a transonic compressor rotor, a centrifugal impeller, the internal coolant passage of a radial inflow turbine and a turbine disc-cavity flow. Unsteady solutions are presented for a cylinder shedding vortices and for a turbine wake/rotor interaction.
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
Application of a low order panel method to complex three-dimensional internal flow problems
NASA Technical Reports Server (NTRS)
Ashby, D. L.; Sandlin, D. R.
1986-01-01
An evaluation of the ability of a low order panel method to predict complex three-dimensional internal flow fields was made. The computer code VSAERO was used as a basis for the evaluation. Guidelines for modeling internal flow geometries were determined, and the effects of varying the boundary conditions and of using numerical approximations on the solution's accuracy were studied. Several test cases were run and the results were compared with theoretical or experimental results. Modeling an internal flow geometry as a closed box with normal velocities specified on an inlet and exit face provided accurate results and gave the user control over the boundary conditions. The values of the boundary conditions greatly influenced the amount of leakage an internal flow geometry suffered and could be adjusted to eliminate leakage. The use of the far-field approximation to reduce computation time influenced the accuracy of a solution and was coupled with the values of the boundary conditions needed to eliminate leakage. The error induced in the influence coefficients by using the far-field approximation was found to be dependent on the type of influence coefficient, the far-field radius, and the aspect ratio of the panels.
NASA Astrophysics Data System (ADS)
Develaki, Maria
2008-09-01
In view of the complex problems of this age, the question of the socio-ethical dimension of science acquires particular importance. We approach this matter from a philosophical and sociological standpoint, looking at such focal concerns as the motivation, purposes and methods of scientific activity, the ambivalence of scientific research and the concomitant risks, and the conflict between research freedom and external socio-political intervention. We then point out the impediments to the effectiveness of cross-disciplinary or broader meetings for addressing these complex problems and managing the associated risks, given the difficulty in communication between experts in different fields and non-experts, difficulties that education is challenged to help resolve. We find that the social necessity of informed decision-making on the basis of cross-disciplinary collaboration is reflected in the newer curricula, such as that of Greece, in aims like the acquisition of cross-subject knowledge and skills, and the ability to make decisions on controversial issues involving value conflicts. The interest and the reflections of the science education community in these matters increase its—traditionally limited—contribution to the theoretical debate on education and, by extension, the value of science education in the education system.
A new approach to the problem of multiple comparisons in the genetic dissection of complex traits.
Weller, J I; Song, J Z; Heyen, D W; Lewin, H A; Ron, M
1998-01-01
Saturated genetic marker maps are being used to map individual genes affecting quantitative traits. Controlling the "experimentwise" type-I error severely lowers power to detect segregating loci. For preliminary genome scans, we propose controlling the "false discovery rate," that is, the expected proportion of true null hypotheses within the class of rejected null hypotheses. Examples are given based on a granddaughter design analysis of dairy cattle and simulated backcross populations. By controlling the false discovery rate, power to detect true effects is not dependent on the number of tests performed. If no detectable genes are segregating, controlling the false discovery rate is equivalent to controlling the experimentwise error rate. If quantitative loci are segregating in the population, statistical power is increased as compared to control of the experimentwise type-I error. The difference between the two criteria increases with the increase in the number of false null hypotheses. The false discovery rate can be controlled at the same level whether the complete genome or only part of it has been analyzed. Additional levels of contrasts, such as multiple traits or pedigrees, can be handled without the necessity of a proportional decrease in the critical test probability. PMID:9832544
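The false-discovery-rate control proposed here is the Benjamini-Hochberg step-up procedure: sort the m p-values, find the largest rank i with p_(i) ≤ (i/m)·q, and reject that many hypotheses. A minimal sketch (the p-values are purely illustrative, not data from the granddaughter design):

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest
    p-values, where k is the largest rank i with p_(i) <= (i/m) * q.
    Returns a boolean rejection flag per input hypothesis."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

# Mix of strong signals and likely nulls (illustrative values only).
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.5, 0.9]
print(benjamini_hochberg(pvals, q=0.05))
# → [True, True, False, False, False, False, False]
```

Note the property the abstract emphasizes: the threshold scales with the number of discoveries, so adding more tests (more traits, more pedigrees) does not force a proportional drop in the per-test critical probability the way a Bonferroni-style experimentwise correction does.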
Scheib, H.; Pleiss, J.; Kovac, A.; Paltauf, F.; Schmid, R. D.
1999-01-01
The lipases from Rhizopus and Rhizomucor are members of the family of Mucorales lipases. Although they display high sequence homology, their stereoselectivity toward triradylglycerols (sn-2 substituted triacylglycerols) varies. Four different triradylglycerols were investigated, which were classified into two groups: flexible substrates with rotatable O'-C1' ether or ester bonds adjacent to C2 of glycerol and rigid substrates with a rigid N'-C1' amide bond or a phenyl ring in sn-2. Although Rhizopus lipase shows opposite stereopreference for flexible and rigid substrates (hydrolysis in sn-1 and sn-3, respectively), Rhizomucor lipase hydrolyzes both groups of triradylglycerols preferably in sn-1. To explain these experimental observations, computer-aided molecular modeling was applied to study the molecular basis of stereoselectivity. A generalized model for both lipases of the Mucorales family highlights the residues mediating stereoselectivity: (1) L258, the C-terminal neighbor of the catalytic histidine, and (2) G266, which is located in a loop contacting the glycerol backbone of a bound substrate. Interactions with triradylglycerol substrates are dominated by van der Waals contacts. Stereoselectivity can be predicted by analyzing the value of a single substrate torsion angle that discriminates between sn-1 and sn-3 stereopreference for all substrates and lipases investigated here. This simple model can be easily applied in enzyme and substrate engineering to predict Mucorales lipase variants and synthetic substrates with desired stereoselectivity. PMID:10210199
A simple framework for a complex problem? Predicting wildlife-vehicle collisions.
Visintin, Casey; van der Ree, Rodney; McCarthy, Michael A
2016-09-01
Collisions of vehicles with wildlife kill and injure animals and are also a risk to vehicle occupants, but preventing these collisions is challenging. Surveys to identify problem areas are expensive and logistically difficult. Computer modeling has identified correlates of collisions, yet these can be difficult for managers to interpret in a way that will help them reduce collision risk. We introduce a novel method to predict collision risk by modeling hazard (presence and movement of vehicles) and exposure (animal presence) across geographic space. To estimate the hazard, we predict relative traffic volume and speed along road segments across southeastern Australia using regression models based on human demographic variables. We model exposure by predicting suitable habitat for our case study species (Eastern Grey Kangaroo Macropus giganteus) based on existing fauna survey records and geographic and climatic variables. Records of reported kangaroo-vehicle collisions are used to investigate how these factors collectively contribute to collision risk. The species occurrence (exposure) model generated plausible predictions across the study area, reducing the null deviance by 30.4%. The vehicle (hazard) models explained 54.7% variance in the traffic volume data and 58.7% in the traffic speed data. Using these as predictors of collision risk explained 23.7% of the deviance in incidence of collisions. Discrimination ability of the model was good when predicting to an independent dataset. The research demonstrates that collision risks can be modeled across geographic space with a conceptual analytical framework using existing sources of data, reducing the need for expensive or time-consuming field data collection. The framework is novel because it disentangles natural and anthropogenic effects on the likelihood of wildlife-vehicle collisions by representing hazard and exposure with separate, tunable submodels.
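The hazard-times-exposure framing described above can be sketched as two submodels feeding a logistic risk model. The functional forms, coefficients, and input values below are hypothetical placeholders, not the fitted regression and species-distribution models from this study:

```python
import math

def hazard(traffic_volume, traffic_speed):
    """Hazard submodel: here simply the product of predicted traffic
    volume and speed on a road segment (a placeholder form)."""
    return traffic_volume * traffic_speed

def exposure(habitat_suitability):
    """Exposure submodel: predicted probability of animal presence,
    e.g. from a species distribution model."""
    return habitat_suitability

def collision_risk(volume, speed, suitability, beta0=-8.0, beta1=1.0):
    """Combine the two submodels through a logistic link on
    log(hazard * exposure); beta0/beta1 are illustrative, not fitted."""
    x = math.log(max(hazard(volume, speed) * exposure(suitability), 1e-9))
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# A busy road through good habitat vs. a quiet road in poor habitat.
print(collision_risk(5000, 100, 0.8))   # relatively high risk
print(collision_risk(200, 50, 0.1))     # relatively low risk
```

The appeal of the decomposition is that each submodel is independently tunable: managers can ask whether lowering the hazard (speed limits) or the exposure (fencing, habitat design) moves the predicted risk more.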
Graf, John F; Scholz, Bernhard J; Zavodszky, Maria I
2012-02-01
We developed a detailed, whole-body physiologically based pharmacokinetic (PBPK) modeling tool for calculating the distribution of pharmaceutical agents in the various tissues and organs of a human or animal as a function of time. Ordinary differential equations (ODEs) represent the circulation of body fluids through organs and tissues at the macroscopic level, and the biological transport mechanisms and biotransformations within cells and their organelles at the molecular scale. Each major organ in the body is modeled as composed of one or more tissues. Tissues are made up of cells and fluid spaces. The model accounts for the circulation of arterial and venous blood as well as lymph. Since its development was fueled by the need to accurately predict the pharmacokinetic properties of imaging agents, BioDMET is more complex than most PBPK models. The anatomical details of the model are important for the imaging simulation endpoints. Model complexity has also been crucial for quickly adapting the tool to different problems without the need to generate a new model for every problem. When simpler models are preferred, the non-critical compartments can be dynamically collapsed to reduce unnecessary complexity. BioDMET has been used for imaging feasibility calculations in oncology, neurology, cardiology, and diabetes. For this purpose, the time concentration data generated by the model is inputted into a physics-based image simulator to establish imageability criteria. These are then used to define agent and physiology property ranges required for successful imaging. BioDMET has lately been adapted to aid the development of antimicrobial therapeutics. Given a range of built-in features and its inherent flexibility to customization, the model can be used to study a variety of pharmacokinetic and pharmacodynamic problems such as the effects of inter-individual differences and disease-states on drug pharmacokinetics and pharmacodynamics, dosing optimization, and inter
In recent years, a new class of enclosed, closed-path gas analyzers suitable for eddy covariance applications has come to market, designed to combine the advantages of traditional closed-path systems (small density corrections, good performance in poor weather) and open-path syst...
Cadmium - a complex environmental problem. Part II. Cadmium in sludges used as fertilizer.
Davis, R D
1984-02-15
In densely populated countries, efficient sewage treatment is essential to protect river quality. An inevitable by-product is sewage sludge, which has to be disposed of safely and economically. Utilisation of sludge as a fertilizer of agricultural land is the most economic disposal route for inland sewage-treatment works and also benefits farmers by providing a cheap manure. Much of the cadmium in wastewater is concentrated into sludge, which consequently contains higher concentrations of cadmium than soil does. It is impracticable to reduce cadmium concentrations in sludge below certain levels. When sludge is used on farmland, rates of application must be controlled so that cadmium concentrations in soil never reach levels that could significantly contaminate food crops. Cadmium is a principal factor limiting the use of sludge on land. Nevertheless, it is a localized problem, since agricultural land in general receives more cadmium from aerial deposition and phosphatic fertilizers. The significance of accumulations of cadmium in soil depends mainly on its availability for crop uptake. Investigations are described which have attempted to identify and to determine the availability of forms of cadmium in soil. There is considerable research interest in cadmium in soil solution, which is likely to be directly available for crop uptake. Another area of interest is the apparent disappearance of cadmium from sludge-treated soil: soil analysis often cannot fully account for the cadmium added in sludge. Apart from the effect of soil conditions, especially pH value, crop uptake varies according to the particular crop examined. Highest concentrations of cadmium occur in tobacco, lettuce, spinach and other leafy vegetables. Using crop uptake data from field trials it is possible to relate potential human dietary intake of cadmium, on which hazard depends, to soil concentrations of cadmium, which can be controlled by regulating applications of sludge. This provides an objective
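The chain from soil concentration to dietary intake described above can be sketched as a simple soil-to-plant transfer-factor calculation. The function name and all numeric values below are hypothetical illustrations, not field-trial data from the study.

```python
def dietary_cd_intake(soil_cd_mg_kg, transfer_factor, crop_intake_kg_day):
    """Estimate daily cadmium intake (mg/day) from a crop grown on
    sludge-treated soil, using a soil-to-plant transfer factor.
    Illustrative values only; real assessments use crop uptake data
    from field trials and account for soil pH and crop species."""
    crop_cd = soil_cd_mg_kg * transfer_factor   # mg Cd per kg crop
    return crop_cd * crop_intake_kg_day

# Leafy vegetables such as lettuce and spinach have high transfer factors.
intake = dietary_cd_intake(soil_cd_mg_kg=3.0, transfer_factor=0.5,
                           crop_intake_kg_day=0.05)
print(intake)  # 0.075 mg/day
```

Because intake scales linearly with soil concentration in this picture, a tolerable daily intake can be inverted into a soil concentration limit, which in turn sets the permissible sludge application rate.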
NASA Astrophysics Data System (ADS)
Doganca Kucuk, Zerrin; Saysel, Ali Kerem
2017-03-01
A systems-based classroom intervention on environmental education was designed for seventh-grade students; the results were evaluated to assess its impact on the development of systems thinking skills and on standard science achievement, and to determine whether the systems approach is a more effective way to teach environmental issues that are dynamic and complex. A quasi-experimental methodology was used to compare performances of the participants in various dimensions, including systems thinking skills, competence in dynamic environmental problem solving and success in science achievement tests. The same pre-, post- and delayed tests were used with both the comparison and experimental groups in the same public middle school in Istanbul. Classroom activities designed for the comparison group (N = 20) followed the directives of the Science and Technology Curriculum, while the experimental group (N = 22) covered the same subject matter through activities benefiting from systems tools and representations such as behaviour over time graphs, causal loop diagrams, stock-flow structures and hands-on dynamic modelling. After one month of systems-based instruction, the experimental group demonstrated significantly better systems thinking and dynamic environmental problem solving skills. Achievement in dynamic problem solving was found to be relatively stable over time. However, standard science achievement did not improve at all. This paper focuses on the quantitative analysis of the results, the weaknesses of the curriculum and educational implications.
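The stock-flow structures the experimental group worked with can be sketched as a minimal simulation: one stock with a constant inflow and a proportional outflow. The pollutant-in-a-lake framing and all parameter values here are hypothetical teaching examples, not material from the study.

```python
def simulate_stock(inflow, outflow_fraction, initial, steps):
    """Minimal stock-and-flow simulation of the kind used in
    systems-dynamics teaching: a single stock (e.g. pollutant mass
    in a lake) with constant inflow and proportional outflow.
    Parameters are hypothetical, chosen for illustration."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += inflow - outflow_fraction * stock  # one time step
        history.append(stock)
    return history

# With constant inflow and proportional outflow the stock approaches
# the equilibrium inflow / outflow_fraction (here 10.0 / 0.2 = 50.0),
# the kind of behaviour-over-time pattern students graph in class.
traj = simulate_stock(inflow=10.0, outflow_fraction=0.2,
                      initial=0.0, steps=50)
print(traj[-1])  # close to 50.0
```

Running such a model and plotting the trajectory is exactly the "behaviour over time graph" exercise the abstract describes, made concrete.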
Everink, Irma H J; van Haastregt, Jolanda C M; Maessen, Jose M C; Schols, Jos M G A; Kempen, Gertrudis I J M
2017-01-13
An integrated care pathway in geriatric rehabilitation was developed to improve coordination and continuity of care for community-living older adults in the Netherlands, who go through the process of hospital admission, admission to a geriatric rehabilitation facility and discharge back to the home situation. This pathway is a complex intervention and is focused on improving communication, triage and transfers of patients between the hospital, geriatric rehabilitation facility and primary care organisations. A process evaluation was performed to assess the feasibility of this pathway. The study design incorporated mixed methods. Feasibility was assessed by examining (a) whether the pathway was implemented according to plan (fidelity and dose delivered), (b) whether patients, informal caregivers and professionals were satisfied with the pathway (dose received) and (c) which barriers and facilitators influenced implementation (context). These components were derived from the theoretical framework of Saunders and colleagues. Data were collected using three structured face-to-face interviews with patients, self-administered questionnaires among informal caregivers, and group interviews with professionals. Furthermore, data were collected from the information transfer system in the hospital, patient files of the geriatric rehabilitation facility and minutes of evaluation meetings. In total, 113 patients, 37 informal caregivers and 19 healthcare professionals participated in this process evaluation. The pathway was considered largely feasible, as two components were fully implemented according to plan and two components were largely implemented according to plan. The timing and quality of medical discharge summaries were not sufficiently implemented according to plan, and professionals indicated that the triage instrument needed refinement. Healthcare professionals were satisfied with the implementation of the pathway and they indicated that due to improved collaboration, the quality of care
NASA Astrophysics Data System (ADS)
Bailey, B.; Stoll, R., II; Miller, N. E.; Pardyjak, E.; Mahaffee, W.
2014-12-01
Plants cover the majority of Earth's land surface, and thus play a critical role in the surface energy balance. Within individual plant communities, the leaf energy balance is a fundamental component of most biophysical processes. Absorbed radiation drives the energy balance and provides the means by which plants produce food. Available energy is partitioned into sensible and latent heat fluxes to determine surface temperature, which strongly influences rates of metabolic activity and growth. The energy balance of an individual leaf is coupled with other leaves in the community through longwave radiation emission and advection through the air. This complex coupling can make scaling models from leaves to whole canopies difficult, specifically in canopies with complex, heterogeneous geometries. We present a new three-dimensional canopy model that simultaneously resolves sub-tree to whole-canopy scales. The model provides spatially explicit predictions of net radiation exchange, boundary-layer and stomatal conductances, evapotranspiration rates, and ultimately leaf surface temperature. The radiation model includes complex physics such as anisotropic emission and scattering. Radiation calculations are accelerated by leveraging graphics processing unit (GPU) technology, which allows canopy-scale simulations to be performed on a standard desktop workstation. Since validating the three-dimensional distribution of leaf temperature can be extremely challenging, we used several independent measurement techniques to quantify errors in measured and modeled values. When compared with measured leaf temperatures, the model gave a mean error of about 2°C, which was close to the estimated measurement uncertainty.
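The leaf-level balance the abstract describes (absorbed radiation partitioned into longwave emission, sensible heat and latent heat to set leaf temperature) can be sketched with a standard linearization about air temperature. This is a textbook-style simplification, not the paper's model: the function name and parameter values are illustrative, and coupling between leaves is ignored.

```python
def leaf_temperature(R_abs, T_air, g_H, g_v=0.0, VPD=0.0):
    """Linearized single-leaf energy balance (sketch): absorbed
    radiation R_abs (W m-2) is balanced by longwave emission,
    sensible heat, and latent heat. Emission is linearized about
    air temperature T_air (K); solves for the leaf-air offset dT.
    g_H, g_v: boundary-layer heat and vapour conductances (mol m-2 s-1);
    VPD: vapour pressure deficit (kPa). Values are illustrative."""
    sigma = 5.67e-8    # Stefan-Boltzmann constant (W m-2 K-4)
    eps = 0.97         # leaf emissivity
    cp = 29.3          # molar heat capacity of air (J mol-1 K-1)
    lam = 44000.0      # latent heat of vaporization (J mol-1)
    P = 101.3          # air pressure (kPa)

    # eps*sigma*T^4 ~= eps*sigma*T_air^4 + 4*eps*sigma*T_air^3 * dT
    emis0 = eps * sigma * T_air**4
    rad_slope = 4.0 * eps * sigma * T_air**3
    LE = lam * g_v * VPD / P          # latent heat flux (W m-2)
    # R_abs = emis0 + rad_slope*dT + cp*g_H*dT + LE  ->  solve for dT
    dT = (R_abs - emis0 - LE) / (rad_slope + cp * g_H)
    return T_air + dT

# Sunlit, transpiring leaf: ends up about 1 K warmer than the air.
T_leaf = leaf_temperature(R_abs=600.0, T_air=298.0, g_H=1.0,
                          g_v=0.2, VPD=1.5)
print(T_leaf)
```

The canopy model in the abstract solves a coupled version of this balance for every leaf element, with the longwave emission and advection terms linking neighbouring leaves, which is what makes GPU acceleration worthwhile.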
Analyzing failures: The problems and the solutions
Goel, V.S.
1985-01-01
This book presents the papers given at a conference on fatigue, corrosion cracking, fracture mechanics, and failure analysis. Topics considered at the conference included materials failure prevention at the National Bureau of Standards, the failure analysis of a liquid propane gas cylinder, a vent header crack at the Hatch-2 reactor, the application of fracture mechanics to pipeline failure analysis, the microstructural examination of fuel rods subjected to a simulated loss of coolant accident, and a heavy section stainless steel-316 pipe and its weldment with reduced susceptibility to embrittlement for advanced fossil plants.
Studt, Felix; Tuczek, Felix
2006-09-01
Dinitrogen complexes of transition metals exhibit different binding geometries of N2 (end-on terminal, end-on bridging, side-on bridging, side-on end-on bridging), which are investigated by spectroscopy and DFT calculations, analyzing their electronic structure and reactivity. For comparison, a bis(mu-nitrido) complex, where the N-N bond has been split, has been studied as well. Most of these systems are highly covalent and have strong metal-nitrogen bonds. In the present review, particular emphasis is put on a consideration of the activation of the coordinated dinitrogen ligand, making it susceptible to protonation, reactions with electrophiles, or cleavage. In this context, theoretical, structural, and spectroscopic data giving information on the amount of charge on the N2 unit are presented. The orbital interactions leading to a charge transfer from the metals to the dinitrogen ligand and the charge distribution within the coordinated N2 group are analyzed. Correlations between the binding mode and the observed reactivity of N2 are discussed.