Sample records for "highly complex problem"

  1. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At the beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior and were assessed 4 times from the beginning of 3rd through the end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity but did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth for low-complexity problems, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985

  2. Self-Regulation in the Midst of Complexity: A Case Study of High School Physics Students Engaged in Ill-Structured Problem Solving

    ERIC Educational Resources Information Center

    Milbourne, Jeffrey David

    2016-01-01

    The purpose of this dissertation study was to explore the experiences of high school physics students who were solving complex, ill-structured problems, in an effort to better understand how self-regulatory behavior mediated the project experience. Consistent with Voss, Green, Post, and Penner's (1983) conception of an ill-structured problem in…

  3. Associations between work-related stress in late midlife, educational attainment, and serious health problems in old age: a longitudinal study with over 20 years of follow-up.

    PubMed

    Nilsen, Charlotta; Andel, Ross; Fors, Stefan; Meinow, Bettina; Darin Mattsson, Alexander; Kåreholt, Ingemar

    2014-08-27

    People spend a considerable amount of time at work over the course of their lives, which makes the workplace important to health and aging. However, little is known about the potential long-term effects of work-related stress on late-life health. This study aims to examine work-related stress in late midlife and educational attainment in relation to serious health problems in old age. Data from nationally representative Swedish surveys were used in the analyses (n = 1,502). Follow-up time was 20-24 years. Logistic regressions were used to examine work-related stress (self-reported job demands, job control, and job strain) in relation to serious health problems measured as none, serious problems in one health domain, and serious problems in two or three health domains (complex health problems). While not all results were statistically significant, high job demands were associated with higher odds of serious health problems among women but lower odds of serious health problems among men. Job control was negatively associated with serious health problems. The strongest association in this study was between high job strain and complex health problems. After adjustment for educational attainment, some of the associations became statistically nonsignificant. However, high job demands remained related to lower odds of serious problems in one health domain among men, and low job control remained associated with higher odds of complex health problems among men. High job demands were associated with lower odds of complex health problems among men with low education, but not among men with high education, or among women regardless of level of education. The results underscore the importance of work-related stress for long-term health. Modifications to the work environment to reduce work stress (e.g., providing opportunities for self-direction, monitoring levels of psychological job demands) may serve as a springboard for the development of preventive strategies to improve public health both before and after retirement.

  4. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), two relatively new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, yet both also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Extensive experiments show that HS and TLBO are strongly complementary: HS has strong global exploration power but converges slowly, whereas TLBO converges quickly but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing global exploration and exploitation: HS mainly explores unknown regions, while TLBO rapidly exploits high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five strong TLBO variants at similar run time, which indicates that our method is promising for complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224
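
    To make the hybrid concrete, the sketch below alternates HS and TLBO moves over one shared population and picks the operator with a simple success-rate rule. It is a minimal illustration in the spirit of this abstract, not the authors' implementation: hmcr and par are standard Harmony Search parameters, and the success-rate selection is an assumed stand-in for the paper's self-adaptive strategy.

      import numpy as np

      def sphere(x):                              # toy objective (minimize)
          return float(np.sum(x ** 2))

      def hstlbo_sketch(f, dim=30, pop=20, iters=2000, lo=-100.0, hi=100.0,
                        hmcr=0.9, par=0.3, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lo, hi, (pop, dim))     # shared population / harmony memory
          fit = np.array([f(x) for x in X])
          succ = np.ones(2)                       # success counts: [HS, TLBO]
          for _ in range(iters):
              k = rng.choice(2, p=succ / succ.sum())   # self-adaptive operator choice
              if k == 0:                               # Harmony Search move
                  new = np.where(rng.random(dim) < hmcr,
                                 X[rng.integers(pop, size=dim), np.arange(dim)],
                                 rng.uniform(lo, hi, dim))
                  adj = rng.random(dim) < par          # pitch adjustment
                  new[adj] += (rng.random(int(adj.sum())) - 0.5) * 0.01 * (hi - lo)
              else:                                    # TLBO teacher move
                  i = rng.integers(pop)
                  tf = rng.integers(1, 3)              # teaching factor in {1, 2}
                  new = X[i] + rng.random(dim) * (X[np.argmin(fit)] - tf * X.mean(axis=0))
              new = np.clip(new, lo, hi)
              fn = f(new)
              worst = int(np.argmax(fit))
              if fn < fit[worst]:                      # replace worst member on improvement
                  X[worst], fit[worst] = new, fn
                  succ[k] += 1                         # reward the operator that succeeded
          return X[np.argmin(fit)], float(fit.min())

      best_x, best_f = hstlbo_sketch(sphere)
      print(f"best value: {best_f:.3e}")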

  5. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), two relatively new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, yet both also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Extensive experiments show that HS and TLBO are strongly complementary: HS has strong global exploration power but converges slowly, whereas TLBO converges quickly but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing global exploration and exploitation: HS mainly explores unknown regions, while TLBO rapidly exploits high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five strong TLBO variants at similar run time, which indicates that our method is promising for complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications.

  6. Conceptual and procedural knowledge community college students use when solving a complex science problem

    NASA Astrophysics Data System (ADS)

    Steen-Eibensteiner, Janice Lee

    2006-07-01

    A strong science knowledge base and problem solving skills have always been highly valued for employment in the science industry. Skills currently needed for employment include being able to problem solve (Overtoom, 2000). Academia also recognizes the need to effectively teach students to apply problem solving skills in clinical settings. This thesis investigates how students solve complex science problems in an academic setting in order to inform the development of problem solving skills for the workplace. Students' use of problem solving skills, in the form of learned concepts and procedural knowledge, was studied as students completed a problem that might come up in real life. Students were taking a community college sophomore biology course, Human Anatomy & Physiology II. The problem topic was negative feedback inhibition of the thyroid and parathyroid glands. The research questions answered were (1) How well do community college students use a complex of conceptual knowledge when solving a complex science problem? (2) What conceptual knowledge are community college students using correctly, incorrectly, or not using when solving a complex science problem? (3) What problem solving procedural knowledge are community college students using successfully, unsuccessfully, or not using when solving a complex science problem? Within the whole class, the high academic level participants scored a mean of 72% correct on chapter test questions, a low-average to fair grade of C-. The middle and low academic level participants both failed the test questions (37% and 30%, respectively); overall, 29% (9/31) of the students showed only fair performance while 71% (22/31) failed. In a subset of two students each from the high, middle, and low academic levels, 35% (8/23) of the concepts were used effectively, 22% (5/23) marginally, and 43% (10/23) poorly. Only one concept was used incorrectly, by three of the six students, and was identified as a misconception. Of 21 problem-solving pathway characteristics, 1 (5%) was used effectively, 7 (33%) marginally, and 13 (62%) poorly. Very few (0 to 4) problem-solving pathway characteristics were used unsuccessfully; most were simply not used.

  7. High frequency vibration analysis by the complex envelope vectorization.

    PubMed

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure for solving high frequency vibration and vibro-acoustic problems that provides the envelope of the physical solution. CEDA is based on a variable transformation that maps high frequency oscillations into signals of low frequency content, and it has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, outlining the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.

  8. Word problems: a review of linguistic and numerical factors contributing to their difficulty

    PubMed Central

    Daroczy, Gabriella; Wolska, Magdalena; Meurers, Walt Detmar; Nuerk, Hans-Christoph

    2015-01-01

    Word problems (WPs) are among the most difficult and complex problem types that pupils encounter during their elementary-level mathematical development. In the classroom setting, they are often viewed as merely arithmetic tasks; however, recent research shows that a number of linguistic verbal components not directly related to arithmetic contribute greatly to their difficulty. In this review, we distinguish three components of WP difficulty: (i) the linguistic complexity of the problem text itself, (ii) the numerical complexity of the arithmetic problem, and (iii) the relation between the linguistic and numerical complexity of a problem. We discuss the impact of each of these factors on WP difficulty and motivate the need for a high degree of control in stimuli design for experiments that manipulate WP difficulty for a given age group. PMID:25883575

  9. Complex Problem Solving in L1 Education: Senior High School Students' Knowledge of the Language Problem-Solving Process

    ERIC Educational Resources Information Center

    van Velzen, Joke H.

    2017-01-01

    The solving of reasoning problems in first language (L1) education can produce an understanding of language, and student autonomy in language problem solving, both of which are contemporary goals in senior high school education. The purpose of this study was to obtain a better understanding of senior high school students' knowledge of the language…

  10. Identification and Addressing Reduction-Related Misconceptions

    ERIC Educational Resources Information Center

    Gal-Ezer, Judith; Trakhtenbrot, Mark

    2016-01-01

    Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract…

  11. Developmental problems and their solution for the Space Shuttle main engine alternate liquid oxygen high-pressure turbopump: Anomaly or failure investigation the key

    NASA Astrophysics Data System (ADS)

    Ryan, R.; Gross, L. A.

    1995-05-01

    The Space Shuttle main engine (SSME) alternate high-pressure liquid oxygen pump experienced synchronous vibration and ball bearing life problems that were program threatening. The success of the program hinged on the ability to solve these development problems. The design and the solutions to these problems are grounded in lessons learned and experience from prior programs and technology programs, and in the ability to properly conduct failure or anomaly investigations. The failure investigation determines the cause of the problem and is the basis for recommending design solutions. For a complex problem, a comprehensive solution requires that formal investigation procedures be used, including fault trees, resolution logic, and action items worked through a concurrent engineering, multidiscipline team. The normal tendency to use an intuitive, cut-and-try approach will usually prove costly in both money and time, and will reach a less than optimum, poorly understood answer. The SSME alternate high-pressure oxidizer turbopump development had two complex problems critical to program success: (1) high synchronous vibrations and (2) excessive ball bearing wear. This paper uses these two problems as examples of this formal failure investigation approach. The results of the team's investigation provide insight into the interactions and sensitivities among the turbomachinery technical disciplines and the fine balance of competing investigations required to solve problems and guarantee program success. It is very important to the solution process that maximum use be made of the resources that both the contractor and the Government can bring to the problem in a supporting and noncompeting way. There is no place for a not-invented-here attitude. The resources include, but are not limited to: (1) specially skilled professionals; (2) supporting technologies; (3) computational codes and capabilities; and (4) test and manufacturing facilities.

  12. Developmental problems and their solution for the Space Shuttle main engine alternate liquid oxygen high-pressure turbopump: Anomaly or failure investigation the key

    NASA Technical Reports Server (NTRS)

    Ryan, R.; Gross, L. A.

    1995-01-01

    The Space Shuttle main engine (SSME) alternate high-pressure liquid oxygen pump experienced synchronous vibration and ball bearing life problems that were program threatening. The success of the program hinged on the ability to solve these development problems. The design and the solutions to these problems are grounded in lessons learned and experience from prior programs and technology programs, and in the ability to properly conduct failure or anomaly investigations. The failure investigation determines the cause of the problem and is the basis for recommending design solutions. For a complex problem, a comprehensive solution requires that formal investigation procedures be used, including fault trees, resolution logic, and action items worked through a concurrent engineering, multidiscipline team. The normal tendency to use an intuitive, cut-and-try approach will usually prove costly in both money and time, and will reach a less than optimum, poorly understood answer. The SSME alternate high-pressure oxidizer turbopump development had two complex problems critical to program success: (1) high synchronous vibrations and (2) excessive ball bearing wear. This paper uses these two problems as examples of this formal failure investigation approach. The results of the team's investigation provide insight into the interactions and sensitivities among the turbomachinery technical disciplines and the fine balance of competing investigations required to solve problems and guarantee program success. It is very important to the solution process that maximum use be made of the resources that both the contractor and the Government can bring to the problem in a supporting and noncompeting way. There is no place for a not-invented-here attitude. The resources include, but are not limited to: (1) specially skilled professionals; (2) supporting technologies; (3) computational codes and capabilities; and (4) test and manufacturing facilities.

  13. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structure and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models that simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
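
    The two ingredients named above are easy to demonstrate. In the minimal sketch below, a cheap noisy surrogate orders a large set of sampled designs (Ordinal Comparison), a top-20 selected set is kept rather than a single winner (Goal Softening), and only the selected set is evaluated with the expensive model. The one-dimensional cost functions are assumptions for illustration, not the thesis' turbine-blade simulation.

      import numpy as np

      # Ordinal optimization in miniature: rank designs with a cheap, noisy model,
      # keep the best-ranked few ("goal softening"), and spend the expensive
      # evaluations only on that selected set.
      rng = np.random.default_rng(1)

      def true_cost(theta):                 # stands in for an expensive simulation
          return (theta - 0.7) ** 2

      def crude_cost(theta):                # cheap surrogate = truth + large noise
          return true_cost(theta) + rng.normal(0.0, 0.05)

      thetas = rng.uniform(0.0, 1.0, 1000)             # sample the design space
      order = np.argsort([crude_cost(t) for t in thetas])
      selected = thetas[order[:20]]                    # top-20 by ordinal comparison
      best = min(selected, key=true_cost)              # 20 expensive runs, not 1000
      print(f"selected design: {best:.3f}")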

  14. Teaching Problem-Solving Skills to Nuclear Engineering Students

    ERIC Educational Resources Information Center

    Waller, E.; Kaye, M. H.

    2012-01-01

    Problem solving is an essential skill for nuclear engineering graduates entering the workforce. Training in qualitative and quantitative aspects of problem solving allows students to conceptualise and execute solutions to complex problems. Solutions to problems in high consequence fields of study such as nuclear engineering require rapid and…

  15. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.

  16. Complex Problem Solving in Educational Contexts--Something beyond "g": Concept, Assessment, Measurement Invariance, and Construct Validity

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wüstenberg, Sascha; Molnár, Gyöngyvér; Fischer, Andreas; Funke, Joachim; Csapó, Benő

    2013-01-01

    Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…

  17. Diagnostic imaging learning resources evaluated by students and recent graduates.

    PubMed

    Alexander, Kate; Bélisle, Marilou; Dallaire, Sébastien; Fernandez, Nicolas; Doucet, Michèle

    2013-01-01

    Many learning resources can help students develop the problem-solving abilities and clinical skills required for diagnostic imaging. This study explored veterinary students' perceptions of the usefulness of a variety of learning resources. Perceived resource usefulness was measured for different levels of students and for academic versus clinical preparation. Third-year (n=139) and final (fifth) year (n=105) students and recent graduates (n=56) completed questionnaires on perceived usefulness of each resource. Resources were grouped for comparison: abstract/low complexity (e.g., notes, multimedia presentations), abstract/high complexity (e.g., Web-based and film case repositories), concrete/low complexity (e.g., large-group "clicker" workshops), and concrete/high complexity (e.g., small-group interpretation workshops). Lower-level students considered abstract/low-complexity resources more useful for academic preparation and concrete resources more useful for clinical preparation. Higher-level students/recent graduates also considered abstract/low-complexity resources more useful for academic preparation. For all levels, lecture notes were considered highly useful. Multimedia slideshows were an interactive complement to notes. The usefulness of a Web-based case repository was limited by accessibility problems and difficulty. Traditional abstract/low-complexity resources were considered useful for more levels and contexts than expected. Concrete/high-complexity resources need to better represent clinical practice to be considered more useful for clinical preparation.

  18. Complex Problem Solving in Teams: The Impact of Collective Orientation on Team Process Demands.

    PubMed

    Hagemann, Vera; Kluge, Annette

    2017-01-01

    Complex problem solving is challenging and a high-level cognitive process for individuals. When analyzing complex problem solving in teams, an additional, new dimension has to be considered, as teamwork processes increase the requirements already placed on individual team members. After introducing an idealized teamwork process model that complex problem solving teams pass through, integrating the relevant teamwork skills for interdependently working teams into the model, and combining it with the four kinds of team processes (transition, action, interpersonal, and learning processes), the paper demonstrates the importance of fulfilling team process demands for successful complex problem solving within teams. To this end, results from a controlled team study within complex situations are presented. The study focused on emergent states, such as collective orientation, cohesion, and trust, that influence action processes such as coordination and that dynamically enable effective teamwork in complex situations. Before conducting the experiments, participants were divided by median split into two-person teams with either high (n = 58) or low (n = 58) collective orientation values. The study was conducted with the microworld C3Fire, which simulates dynamic decision making and acting in complex situations within a teamwork context. The microworld includes interdependent tasks such as extinguishing forest fires or protecting houses. Two firefighting scenarios were developed, each taking a maximum of 15 min. All teams worked on these two scenarios. Coordination within the team and the resulting team performance were calculated based on a log-file analysis. The results show no relationship between trust and either action processes or team performance; likewise, no relationships were found for cohesion. Only the collective orientation of team members positively influences team performance in complex environments, mediated by action processes such as coordination within the team. The results are discussed in relation to previous empirical findings and to learning processes within the team, with a focus on feedback strategies.

  19. Complex Problem Solving in Teams: The Impact of Collective Orientation on Team Process Demands

    PubMed Central

    Hagemann, Vera; Kluge, Annette

    2017-01-01

    Complex problem solving is challenging and a high-level cognitive process for individuals. When analyzing complex problem solving in teams, an additional, new dimension has to be considered, as teamwork processes increase the requirements already placed on individual team members. After introducing an idealized teamwork process model that complex problem solving teams pass through, integrating the relevant teamwork skills for interdependently working teams into the model, and combining it with the four kinds of team processes (transition, action, interpersonal, and learning processes), the paper demonstrates the importance of fulfilling team process demands for successful complex problem solving within teams. To this end, results from a controlled team study within complex situations are presented. The study focused on emergent states, such as collective orientation, cohesion, and trust, that influence action processes such as coordination and that dynamically enable effective teamwork in complex situations. Before conducting the experiments, participants were divided by median split into two-person teams with either high (n = 58) or low (n = 58) collective orientation values. The study was conducted with the microworld C3Fire, which simulates dynamic decision making and acting in complex situations within a teamwork context. The microworld includes interdependent tasks such as extinguishing forest fires or protecting houses. Two firefighting scenarios were developed, each taking a maximum of 15 min. All teams worked on these two scenarios. Coordination within the team and the resulting team performance were calculated based on a log-file analysis. The results show no relationship between trust and either action processes or team performance; likewise, no relationships were found for cohesion. Only the collective orientation of team members positively influences team performance in complex environments, mediated by action processes such as coordination within the team. The results are discussed in relation to previous empirical findings and to learning processes within the team, with a focus on feedback strategies. PMID:29033886

  20. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.

  1. Youthful High School Noncompleters: Enhancing Opportunities for Employment and Education. Information Series No. 316.

    ERIC Educational Resources Information Center

    Martin, Larry G.

    High school noncompletion is a complex problem that requires complex solutions that schools are often inhibited from implementing quickly. Alternative educational services are often necessary for students who are able to complete high school but who find school an unrewarding place in which to learn. A loose network of public and private programs…

  2. Multidimensional Functional Behaviour Assessment within a Problem Analysis Framework.

    ERIC Educational Resources Information Center

    Ryba, Ken; Annan, Jean

    This paper presents a new approach to contextualized problem analysis developed for use with multimodal Functional Behaviour Assessment (FBA) at Massey University in Auckland, New Zealand. The aim of problem analysis is to simplify complex problems that are difficult to understand. It accomplishes this by providing a high order framework that can…

  3. Moving Material into Space Without Rockets.

    ERIC Educational Resources Information Center

    Cheng, R. S.; Trefil, J. S.

    1985-01-01

    In response to conventional rockets' demands on fuel supplies, electromagnetic launchers were developed to give payloads high velocity using a stationary energy source. Several orbital mechanics problems are solved, including a simple problem (radial launch with no rotation) and a complex problem involving air resistance and gravity. (DH)

  4. Bayesian linkage and segregation analysis: factoring the problem.

    PubMed

    Matthysse, S

    2000-01-01

    Complex segregation analysis and linkage methods are mathematical techniques for the genetic dissection of complex diseases. They are used to delineate complex modes of familial transmission and to localize putative disease susceptibility loci to specific chromosomal locations. The computational problem of Bayesian linkage and segregation analysis is one of integration in high-dimensional spaces. In this paper, three available techniques for Bayesian linkage and segregation analysis are discussed: Markov Chain Monte Carlo (MCMC), importance sampling, and exact calculation. The contribution of each to the overall integration will be explicitly discussed.
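
    Of the three techniques, importance sampling is the quickest to demonstrate. The sketch below estimates a posterior expectation from an unnormalized density, which is the core move behind Bayesian integration in high dimensions; the one-dimensional toy posterior is an assumption for illustration, not a linkage model.

      import numpy as np

      # Self-normalized importance sampling: estimate E[x] under a posterior we
      # can evaluate only up to a normalizing constant.
      rng = np.random.default_rng(2)

      def unnorm_post(x):                     # toy unnormalized posterior
          return np.exp(-0.5 * (x - 1.0) ** 2) * (x > 0)

      xs = rng.normal(0.0, 2.0, 100_000)      # draws from the proposal q = N(0, 4)
      q = np.exp(-0.5 * (xs / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))
      w = unnorm_post(xs) / q                 # importance weights
      print((w * xs).sum() / w.sum())         # posterior mean estimate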

  5. From Poor Performance to Success under Stress: Working Memory, Strategy Selection, and Mathematical Problem Solving under Pressure

    ERIC Educational Resources Information Center

    Beilock, Sian L.; DeCaro, Marci S.

    2007-01-01

    Two experiments demonstrate how individual differences in working memory (WM) impact the strategies used to solve complex math problems and how consequential testing situations alter strategy use. In Experiment 1, individuals performed multistep math problems under low- or high-pressure conditions and reported their problem-solving strategies.…

  6. Numerical Modeling of Pulsed Electrical Discharges for High-Speed Flow Control

    DTIC Science & Technology

    2012-02-01

    …two dimensions, and later on more complex problems. Subsequent work compared different physical models for pulsed discharges: one-moment (drift-diffusion with …) … The state of a particle can be specified by its position and velocity. In principle, the motion of a large group of particles can be predicted from …

  7. Kinematical simulation of robotic complex operation for implementing full-scale additive technologies of high-end materials, composites, structures, and buildings

    NASA Astrophysics Data System (ADS)

    Antsiferov, S. I.; Eltsov, M. Iu; Khakhalev, P. A.

    2018-03-01

    This paper considers a newly designed electronic digital model of a robotic complex for implementing full-scale additive technologies, funded under a Federal Target Program. The electronic digital model was used to solve the problem of simulating the movement of the robotic complex in the NX CAD/CAM/CAE system. As part of solving this problem, the virtual mechanism was built and the main assemblies, joints, and drives were identified. In addition, the maximum allowed printable area size was identified for the robotic complex, and a simulation of printing a rectangular-shaped article was carried out.

  8. Application of higher-order cepstral techniques in problems of fetal heart signal extraction

    NASA Astrophysics Data System (ADS)

    Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.

    1996-10-01

    Recently, cepstral analysis based on second order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated ECG complexes of mothers and fetuses, obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the mother complex after transformation to cepstral domains, are first obtained; this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain result in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that needed rectification in this paper. These are: (1) errors resulting from the phase unwrapping algorithms, leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity, due to the highly nonlinear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) owing to the problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e., the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order based high resolution differential cepstrum technique results in recovery of delays of the order of 120 milliseconds.
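
    The core cepstral idea, that a delayed additive copy of a waveform shows up as a peak at the delay quefrency, can be sketched in a few lines. This uses the simpler real cepstrum rather than the complex and third-order differential cepstra discussed above, and the synthetic signal and 120-sample delay are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      d = 120                                  # echo delay in samples (assumed)
      s = rng.normal(size=2048)                # stand-in for a maternal ECG record
      x = s.copy()
      x[d:] += 0.5 * s[:-d]                    # weak delayed component, e.g. overlap

      # Real cepstrum: inverse FFT of the log magnitude spectrum.
      ceps = np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)).real
      print(np.argmax(ceps[50:500]) + 50)      # peak quefrency recovers the delay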

  9. Formation of inclusion complexes between high amylose starch and octadecyl ferulate via steam jet cooking

    USDA-ARS?s Scientific Manuscript database

    Amylose can form inclusion complexes with guest molecules and represents an interesting approach to deliver bioactive molecules. However, ferulic acid has been shown not to form single helical inclusion complexes with amylose. To overcome this problem a ferulic acid ester, octadecyl ferulate, posses...

  10. A Tentative Organizational Schema for Decision-Making Problems.

    ERIC Educational Resources Information Center

    Osborn, William C.; Goodman, Barbara Ettinger

    This report presents the results of research that examined widely diverse decision problems and attempted to specify their common behavior elements. To take into account the psychological complexity of most real-life decision problems, and to develop a tentative organization of decision behavior that will embrace the many, highly diverse types of…

  11. High School Students' Use of Meiosis When Solving Genetics Problems.

    ERIC Educational Resources Information Center

    Wynne, Cynthia F.; Stewart, Jim; Passmore, Cindy

    2001-01-01

    Paints a different picture of students' reasoning with meiosis as they solved complex, computer-generated genetics problems, some of which required them to revise their understanding of meiosis in response to anomalous data. Students were able to develop a rich understanding of meiosis and can utilize that knowledge to solve genetics problems.…

  12. Advising a Bus Company on Number of Needed Buses: How High-School Physics Students Deal With a "Complex Problem"?

    ERIC Educational Resources Information Center

    Balukovic, Jasmina; Slisko, Josip; Hadzibegovic, Zalkida

    2011-01-01

    Since 2003, international project PISA evaluates 15-year old students in solving problems that include "decision taking", "analysis and design of systems" and "trouble-shooting". This article presents the results of a pilot research conducted with 215 students from first to fourth grade of a high school in Sarajevo…

  13. Multiple Problem-Solving Strategies Provide Insight into Students' Understanding of Open-Ended Linear Programming Problems

    ERIC Educational Resources Information Center

    Sole, Marla A.

    2016-01-01

    Open-ended questions that can be solved using different strategies help students learn and integrate content, and provide teachers with greater insights into students' unique capabilities and levels of understanding. This article provides a problem that was modified to allow for multiple approaches. Students tended to employ high-powered, complex,…

  14. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  15. Studies on hepatic lipidosis and coinciding health and fertility problems of high-producing dairy cows using the "Utrecht fatty liver model of dairy cows". A review.

    PubMed

    Geelen, M J H; Wensing, T

    2006-09-01

    Fatty liver or hepatic lipidosis is a major metabolic disorder of high-producing dairy cows that occurs rather frequently in early lactation and is associated with decreased health, production and fertility. A background section of the review explores reasons why high-producing dairy cows are prone to develop fatty liver post partum. Hepatic lipidosis and coinciding health and fertility problems seriously endanger profitability and longevity of the dairy cow. Results from a great number of earlier epidemiological and clinical studies made it clear that a different approach was needed for elucidation of pathogenesis and etiology of this complex of health problems. There was a need for an adequate animal model in which hepatic lipidosis and production, health and fertility problems could be provoked under controlled conditions. It was hypothesized that overconditioning ante partum and feed restriction post partum might induce lipolysis in adipose tissue and triacylglycerol accumulation in the liver following calving. This consideration formed the basis for the experiments, which resulted in the "Utrecht fatty liver model of dairy cows". In this model, post partum triacylglycerol-lipidosis as well as the whole complex of health and fertility problems are induced under well-controlled conditions. The experimental protocol based on this hypothesis produced in all cases (10 feeding trials with over 150 dairy cattle) the intended result, i.e. all experimental cows developed post partum higher hepatic triacylglycerol concentrations than did control cows. The model was evaluated in biochemical, clinical pathology, immunological, clinical and fertility terms. It turned out that in this model, post partum triacylglycerol-lipidosis as well as the whole complex of health and fertility problems were induced under well-controlled conditions.

  16. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. At the same time, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high strain and high-strain-rate problems. In this field, an accurate analysis of large deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the high amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is presented. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates their development, maintainability and expandability. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, operator overloading or the use of template classes, are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservative and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.
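
    The partitioning that the abstract credits to OOP is easy to show in miniature. The sketch below is in Python for brevity rather than the paper's C++, and every class and method name is an illustrative assumption, not DynELA's API: constitutive laws are interchangeable subclasses, so element code is untouched when a new material model is added.

      import numpy as np

      class ConstitutiveLaw:
          """Base class: any material model maps strain to stress."""
          def stress(self, strain):
              raise NotImplementedError

      class LinearElastic(ConstitutiveLaw):
          def __init__(self, E):
              self.E = E
          def stress(self, strain):
              return self.E * strain

      class PerfectlyPlastic(ConstitutiveLaw):
          def __init__(self, E, yield_stress):
              self.E, self.sy = E, yield_stress
          def stress(self, strain):
              return np.clip(self.E * strain, -self.sy, self.sy)

      class BarElement:
          """1D two-node element; delegates material behavior to the law object."""
          def __init__(self, law, area=1.0, length=1.0):
              self.law, self.A, self.L = law, area, length
          def internal_force(self, u1, u2):
              strain = (u2 - u1) / self.L
              return self.A * self.law.stress(strain)

      elem = BarElement(PerfectlyPlastic(E=210e9, yield_stress=250e6))
      print(elem.internal_force(0.0, 0.002))    # axial force capped by yielding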

  17. Increase in the efficiency of a high-speed ramjet on hydrocarbon fuel at the flying vehicle acceleration up to M = 6+

    NASA Astrophysics Data System (ADS)

    Abashev, V. M.; Korabelnikov, A. V.; Kuranov, A. L.; Tretyakov, P. K.

    2017-10-01

    In analyzing the work process in a ramjet, it is reasonable to consider together the ensemble of problems whose solutions determine the engine efficiency. The main problems are ensuring a high completeness of fuel combustion with minimal hydraulic losses, the reliability of cooling of high-heat areas using the cooling resource of the fuel, and the strength of the engine duct elements under the non-uniform heat loads caused by fuel combustion in complex gas-dynamic flow structures. The fundamental techniques and approaches to the solution of these problems are considered in the present report, and their novelty and advantages in comparison with conventional techniques are substantiated. In particular, a technique is proposed for arranging an intense (pre-detonation) combustion regime that ensures a high completeness of fuel combustion and minimal hydraulic losses at a smooth deceleration of a supersonic flow down to the sound velocity, using pulsed-periodic gas-dynamic flow control. A technique is also proposed for cooling the high-heat areas that employs the cooling resource of the hydrocarbon fuel, including the chemical transformation (conversion) of kerosene using nano-catalysts. An analysis has shown that the highly heated structure will operate in the elastic-plastic domain of the behavior of constructional materials, which is directly connected to the engine operation resource. This raises the problem of the degradation of the ramjet shells under deformation. The deformations also significantly influence the work process in the combustor and, naturally, the heat transfer process and the performance of the catalysts (through the action of plastic and elastic deformations of the restrained shells). The report presents results illustrating the identified problems and concludes that a complex investigation is necessary, combining model experiments with computational and theoretical studies.

  18. Expert system applications for army vehicle diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halle, R.F.

    1987-01-01

    Bulky manuals, limited training procedures, and complex Automatic Test Equipment are but a few of the problems a mechanic must face when trying to repair many of the military's new and highly complex vehicle systems. Recent technological advances in Expert Systems have given the mechanic the potential to solve many of these problems and to actually enhance his maintenance proficiency. This paper describes both the history and the future potential of the Expert System and how it could impact the present military maintenance system.

  19. SOFIA's Choice: Automating the Scheduling of Airborne Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Norvig, Peter (Technical Monitor)

    1999-01-01

    This paper describes the problem of scheduling observations for an airborne telescope. Given a set of prioritized observations to choose from, and a wide range of complex constraints governing legitimate choices and orderings, how can we efficiently and effectively create a valid flight plan which supports high priority observations? This problem is quite different from scheduling problems which are routinely solved automatically in industry. For instance, the problem requires making choices which lead to other choices later, and contains many interacting complex constraints over both discrete and continuous variables. Furthermore, new types of constraints may be added as the fundamental problem changes. As a result of these features, this problem cannot be solved by traditional scheduling techniques. The problem resembles other problems in NASA and industry, from observation scheduling for rovers and other science instruments to vehicle routing. The remainder of the paper is organized as follows. In Section 2 we describe the observatory in order to provide some background. In Section 3 we describe the problem of scheduling a single flight. In Section 4 we compare flight planning and other scheduling problems and argue that traditional techniques are not sufficient to solve this problem. We also mention similar complex scheduling problems which may benefit from efforts to solve this problem. In Section 5 we describe an approach for solving this problem based on research into a similar problem, that of scheduling observations for a space-borne probe. In Section 6 we discuss extensions of the flight planning problem as well as other problems which are similar to flight planning. In Section 7 we conclude and discuss future work.

  20. Data based identification and prediction of nonlinear and complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-01

    The problem of reconstructing nonlinear and complex dynamical systems from measured data or time series is central to many scientific disciplines including physical, biological, computer, and social sciences, as well as engineering and economics. The classic approach to phase-space reconstruction through the methodology of delay-coordinate embedding has been practiced for more than three decades, but the paradigm is effective mostly for low-dimensional dynamical systems. Often, the methodology yields only a topological correspondence of the original system. There are situations in various fields of science and engineering where the systems of interest are complex and high dimensional with many interacting components. A complex system typically exhibits a rich variety of collective dynamics, and it is of great interest to be able to detect, classify, understand, predict, and control the dynamics using data that are becoming increasingly accessible due to the advances of modern information technology. To accomplish these goals, especially prediction and control, an accurate reconstruction of the original system is required. Nonlinear and complex systems identification aims at inferring, from data, the mathematical equations that govern the dynamical evolution and the complex interaction patterns, or topology, among the various components of the system. With successful reconstruction of the system equations and the connecting topology, it may be possible to address challenging and significant problems such as identification of causal relations among the interacting components and detection of hidden nodes. The "inverse" problem thus presents a grand challenge, requiring new paradigms beyond the traditional delay-coordinate embedding methodology. The past fifteen years have witnessed rapid development of contemporary complex graph theory with broad applications in interdisciplinary science and engineering. The combination of graph, information, and nonlinear dynamical systems theories with tools from statistical physics, optimization, engineering control, applied mathematics, and scientific computing enables the development of a number of paradigms to address the problem of nonlinear and complex systems reconstruction. In this Review, we describe the recent advances in this forefront and rapidly evolving field, with a focus on compressive sensing based methods. In particular, compressive sensing is a paradigm developed in recent years in applied mathematics, electrical engineering, and nonlinear physics to reconstruct sparse signals using only limited data. It has broad applications ranging from image compression/reconstruction to the analysis of large-scale sensor networks, and it has become a powerful technique to obtain high-fidelity signals for applications where sufficient observations are not available. We will describe in detail how compressive sensing can be exploited to address a diverse array of problems in data based reconstruction of nonlinear and complex networked systems. The problems include identification of chaotic systems and prediction of catastrophic bifurcations, forecasting future attractors of time-varying nonlinear systems, reconstruction of complex networks with oscillatory and evolutionary game dynamics, detection of hidden nodes, identification of chaotic elements in neuronal networks, reconstruction of complex geospatial networks and nodal positioning, and reconstruction of complex spreading networks with binary data. A number of alternative methods, such as those based on system response to external driving, synchronization, and noise-induced dynamical correlation, will also be discussed. Due to the high relevance of network reconstruction to biological sciences, a special section is devoted to a brief survey of the current methods to infer biological networks. Finally, a number of open problems including control and controllability of complex nonlinear dynamical networks are discussed. The methods outlined in this Review are principled on various concepts in complexity science and engineering such as phase transitions, bifurcations, stabilities, and robustness. The methodologies have the potential to significantly improve our ability to understand a variety of complex dynamical systems ranging from gene regulatory systems to social networks toward the ultimate goal of controlling such systems.
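
    Since compressive sensing is the Review's central tool, here is a minimal sketch of the underlying primitive: recovering a sparse vector from far fewer random linear measurements than unknowns via iterative soft-thresholding (ISTA). In network reconstruction the measurement matrix would be built from observed time series; here it is random, and all problem sizes are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      n, m, k = 200, 60, 5                      # unknowns, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
      A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
      y = A @ x_true                            # m << n linear measurements

      lam = 0.01
      step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of gradient
      x = np.zeros(n)
      for _ in range(3000):                     # ISTA: gradient step + shrinkage
          z = x - step * (A.T @ (A @ x - y))
          x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small error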

  1. Fast angular synchronization for phase retrieval via incomplete information

    NASA Astrophysics Data System (ADS)

    Viswanathan, Aditya; Iwen, Mark

    2015-08-01

    We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^* / |x_j x_k^*|, j, k ∈ {1, ..., d}, where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem - where we recover an unknown complex vector from phaseless (or magnitude) measurements.
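
    A hedged sketch of the eigenvector method analyzed in this paper: form the Hermitian matrix of normalized phase differences and read the phases off its leading eigenvector, up to a global rotation. For simplicity the full noiseless measurement matrix is used here, whereas the paper's setting is highly incomplete and possibly noisy.

      import numpy as np

      rng = np.random.default_rng(5)
      d = 50
      x = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, d))   # unknown unit-modulus vector

      H = np.outer(x, x.conj())                 # H[j, k] = x_j * conj(x_k)
      H /= np.abs(H)                            # normalized phase differences

      vals, vecs = np.linalg.eigh(H)            # leading eigenvector of the
      v = vecs[:, -1]                           # Hermitian measurement matrix
      v /= np.abs(v)                            # project entries to unit modulus

      g = (x.conj() @ v) / d                    # estimate the global rotation
      print(np.max(np.abs(v * np.conj(g / abs(g)) - x)))  # ~0: phases recovered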

  2. A Collaborative Problem-Solving Process through Environmental Field Studies

    ERIC Educational Resources Information Center

    Kim, Mijung; Tan, Hoe Teck

    2013-01-01

    This study explored and documented students' responses to opportunities for collective knowledge building and collaboration in a problem-solving process within complex environmental challenges and pressing issues with various dimensions of knowledge and skills. Middle-school students (n = 16; age 14) and high-school students…

  3. Ability of the Child Behavior Checklist-Dysregulation Profile and the Youth Self Report-Dysregulation Profile to identify serious psychopathology and association with correlated problems in high-risk children and adolescents.

    PubMed

    Dölitzsch, Claudia; Kölch, Michael; Fegert, Jörg M; Schmeck, Klaus; Schmid, Marc

    2016-11-15

    The current analyses examined whether the dysregulation profile (DP) 1) could be used to identify children and adolescents at high risk for complex and serious psychopathology and 2) was correlated to other emotional and behavioral problems (such as delinquent behavior or suicide ideation). DP was assessed using both the Child Behavior Checklist (CBCL) and the Youth Self Report (YSR) in a residential care sample. Children and adolescents (N=374) aged 10-18 years living in residential care in Switzerland completed the YSR, and their professional caregivers completed the CBCL. Participants meeting criteria for DP (T-score ≥67 on the anxious/depressed, attention problems, and aggressive behavior scales of the YSR/CBCL) were compared against those who did not for the presence of complex psychopathology (defined as the presence of both emotional and behavioral disorders), and also for the prevalence of several psychiatric diagnoses, suicidal ideation, traumatic experiences, delinquent behaviors, and problems related to quality of life. The diagnostic criteria for CBCL-DP and YSR-DP were met by just 44 (11.8%) and 25 (6.7%) of participants, respectively. Only eight participants (2.1%) met the criteria on both instruments. Further analyses were conducted separately for the CBCL-DP and YSR-DP groups. DP was associated with complex psychopathology in only 34.4% of cases according to CBCL and in 60% of cases according to YSR. YSR-DP was somewhat more likely to be associated with psychiatric disorders and associated problems than was the CBCL-DP. Because of the relatively small overlap between the CBCL-DP and YSR-DP, analyses were conducted largely with different samples, likely contributing to the different results. Despite a high rate of psychopathology in the population studied, both the YSR-DP and the CBCL-DP were able to detect only a small proportion of those with complex psychiatric disorders. This result questions the validity of the YSR-DP and the CBCL-DP in detecting subjects with complex and serious psychopathology. It is possible that different screening instruments may be more effective. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
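    The greedy-pursuit family the authors build on can be illustrated with the generic subspace pursuit step (Dai and Milenkovic): enlarge the current size-K support with the K columns most correlated with the residual, least-squares fit, then prune back to the K largest coefficients. A minimal sketch of that generic building block only (not the SPIGH hierarchy itself, whose subspace construction for the MEG source space is the paper's contribution):

```python
import numpy as np

# Generic subspace pursuit for y = A @ x with x K-sparse.

def subspace_pursuit(A, y, K, n_iter=20):
    S = set(np.argsort(np.abs(A.T @ y))[-K:])    # initial support
    for _ in range(n_iter):
        # Expand: union with the K columns most correlated with the residual.
        idx = sorted(S)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
        cand = set(np.argsort(np.abs(A.T @ r))[-K:]) | S
        # Prune: keep the K largest least-squares coefficients.
        idx = sorted(cand)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        keep = np.argsort(np.abs(coef))[-K:]
        S_new = {idx[i] for i in keep}
        if S_new == S:
            break
        S = S_new
    idx = sorted(S)
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(2)
m, d, K = 80, 256, 6
A = rng.normal(size=(m, d)) / np.sqrt(m)
x0 = np.zeros(d)
x0[rng.choice(d, K, replace=False)] = rng.normal(size=K)
print("error:", np.linalg.norm(subspace_pursuit(A, A @ x0, K) - x0))
```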

  5. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.

  6. Environmental Sensing of Expert Knowledge in a Computational Evolution System for Complex Problem Solving in Human Genetics

    NASA Astrophysics Data System (ADS)

    Greene, Casey S.; Hill, Douglas P.; Moore, Jason H.

    The relationship between interindividual variation in our genomes and variation in our susceptibility to common diseases is expected to be complex with multiple interacting genetic factors. A central goal of human genetics is to identify which DNA sequence variations predict disease risk in human populations. Our success in this endeavour will depend critically on the development and implementation of computational intelligence methods that are able to embrace, rather than ignore, the complexity of the genotype to phenotype relationship. To this end, we have developed a computational evolution system (CES) to discover genetic models of disease susceptibility involving complex relationships between DNA sequence variations. The CES approach is hierarchically organized and is capable of evolving operators of any arbitrary complexity. The ability to evolve operators distinguishes this approach from artificial evolution approaches using fixed operators such as mutation and recombination. Our previous studies have shown that a CES that can utilize expert knowledge about the problem in evolved operators significantly outperforms a CES unable to use this knowledge. This environmental sensing of external sources of biological or statistical knowledge is important when the search space is both rugged and large as in the genetic analysis of complex diseases. We show here that the CES is also capable of evolving operators which exploit one of several sources of expert knowledge to solve the problem. This is important for both the discovery of highly fit genetic models and because the particular source of expert knowledge used by evolved operators may provide additional information about the problem itself. This study brings us a step closer to a CES that can solve complex problems in human genetics in addition to discovering genetic models of disease.

  7. A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems

    NASA Astrophysics Data System (ADS)

    Liu, X.; Banerjee, J. R.

    2017-03-01

    A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique, where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.

  8. A Conceptual Approach to Absolute Value Equations and Inequalities

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Bryson, Janet L.

    2011-01-01

    The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…

  9. High order solution of Poisson problems with piecewise constant coefficients and interface jumps

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben

    2017-04-01

    We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains - which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
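    The fast Poisson solver invoked here is the standard FFT building block: on a periodic Cartesian grid the Laplacian diagonalizes in Fourier space, so ∇²u = f is solved by dividing Fourier coefficients by the symbol. A minimal sketch of that building block alone (the CFM/BIM machinery that handles interfaces and irregular boundaries is the paper's subject and is not reproduced):

```python
import numpy as np

# FFT-based Poisson solve of lap(u) = f on a periodic Cartesian grid.

n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

f = -2.0 * np.sin(X) * np.sin(Y)           # rhs with known exact solution

kx = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                             # avoid division by zero (mean mode)

u_hat = -np.fft.fft2(f) / k2               # -(k^2) u_hat = f_hat
u_hat[0, 0] = 0.0                          # fix the arbitrary mean to zero
u = np.real(np.fft.ifft2(u_hat))

u_exact = np.sin(X) * np.sin(Y)
print("max error:", np.max(np.abs(u - u_exact)))
```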

  10. High power densities from high-temperature material interactions. [in thermionic energy conversion and metallic fluid heat pipes

    NASA Technical Reports Server (NTRS)

    Morris, J. F.

    1981-01-01

    Thermionic energy conversion (TEC) and metallic-fluid heat pipes (MFHPs), offering unique advantages in terrestrial and space energy processing by virtue of operating on working-fluid vaporization/condensation cycles that accept great thermal power densities at high temperatures, share complex materials problems. Simplified equations are presented that verify and solve such problems, suggesting the possibility of cost-effective applications in the near term for TEC and MFHP devices. Among the problems discussed are: the limitation of alkali-metal corrosion, protection against hot external gases, external and internal vaporization, interfacial reactions and diffusion, expansion coefficient matching, and creep deformation.

  11. Formation of inclusion complexes between high amylose starch and octadecyl ferulate via steam jet cooking.

    PubMed

    Kenar, James A; Compton, David L; Little, Jeanette A; Peterson, Steve C

    2016-04-20

    Amylose-ligand inclusion complexes represent an interesting approach to deliver bioactive molecules. However, ferulic acid has been shown not to form single helical inclusion complexes with amylose from high amylose maize starch. To overcome this problem a lipophilic ferulic acid ester, octadecyl ferulate, was prepared and complexed with amylose via excess steam jet cooking. Jet-cooking octadecyl ferulate and high amylose starch gave an amylose-octadecyl ferulate inclusion complex in 51.0% isolated yield. X-ray diffraction (XRD) and differential scanning calorimetry (DSC) confirmed that a 6₁ V-type inclusion complex was formed. Amylose and extraction assays showed the complex to be enriched in amylose (91.9 ± 4.3%) and to contain 70.6 ± 5.6 mg g⁻¹ octadecyl ferulate, although minor hydrolysis (∼4%) of the octadecyl ferulate was observed under the excess steam jet-cooking conditions utilized. This study demonstrates that steam jet cooking is a rapid and scalable process in which to prepare amylose-octadecyl ferulate inclusion complexes. Published by Elsevier Ltd.

  12. Aggregate absorption in HMA mixtures.

    DOT National Transportation Integrated Search

    2013-12-01

    Designing hot mix asphalt (HMA) that will perform for many years is a complex balancing act of selecting an appropriate design asphalt binder content that is sufficiently high to provide durability but not so high as to lead to rutting problems. ...

  13. Apollo Experiment Report: Lunar-Sample Processing in the Lunar Receiving Laboratory High-Vacuum Complex

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1976-01-01

    A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample processing experiences indicate that only a few operating personnel are required to process the sample efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine and scientific, associated with handling an unknown entity such as the lunar sample. Lunar sample handling necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample were better defined. Although the complex successfully handled the processing of Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission just as the vacuum system was reaching its full potential.

  14. Exploring New Physics Frontiers Through Numerical Relativity.

    PubMed

    Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Sperhake, Ulrich

    2015-01-01

    The demand to obtain answers to highly complex problems within strong-field gravity has been met with significant progress in the numerical solution of Einstein's equations - along with some spectacular results - in various setups. We review techniques for solving Einstein's equations in generic spacetimes, focusing on fully nonlinear evolutions but also on how to benchmark those results with perturbative approaches. The results address problems in high-energy physics, holography, mathematical physics, fundamental physics, astrophysics and cosmology.

  15. Granular support vector machines with association rules mining for protein homology prediction.

    PubMed

    Tang, Yuchun; Jin, Bo; Zhang, Yan-Qing

    2005-01-01

    Protein homology prediction between protein sequences is one of the critical problems in computational biology. Such a complex classification problem is common in medical or biological information processing applications. How to build a model with superior generalization capability from training samples is an essential issue for mining knowledge to accurately predict/classify unseen new samples and to effectively support human experts to make correct decisions. A new learning model called granular support vector machines (GSVM) is proposed based on our previous work. GSVM systematically and formally combines the principles from statistical learning theory and granular computing theory and thus provides an interesting new mechanism to address complex classification problems. It works by building a sequence of information granules and then building support vector machines (SVM) in some of these information granules on demand. A good granulation method to find suitable granules is crucial for modeling a GSVM with good performance. In this paper, we also propose an association rules-based granulation method. For the granules induced by association rules with high enough confidence and significant support, we leave them as they are because of their high "purity" and significant effect on simplifying the classification task. For every other granule, a SVM is modeled to discriminate the corresponding data. In this way, a complex classification problem is divided into multiple smaller problems so that the learning task is simplified. The proposed algorithm, here named GSVM-AR, is compared with SVM by KDDCUP04 protein homology prediction data. The experimental results show that finding the splitting hyperplane is not a trivial task (we should be careful to select the association rules to avoid overfitting) and GSVM-AR does show significant improvement compared to building one single SVM in the whole feature space. Another advantage is that the utility of GSVM-AR is very good because it is easy to implement. More importantly and more interestingly, GSVM provides a new mechanism to address complex classification problems.
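    The division of labor GSVM-AR describes - label the high-purity granules directly by their rule, train an SVM only on what remains - can be sketched generically. A hypothetical toy version (rule and data invented for illustration; the paper mines its rules from the KDDCUP04 data):

```python
import numpy as np
from sklearn.svm import SVC

# Toy sketch of the GSVM-AR idea: a high-confidence rule labels its granule
# directly; an SVM is trained only on the remaining data.

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
y[X[:, 0] > 1.5] = 1                       # region where a "pure" rule holds

def rule_mask(X):
    # Stand-in for an association rule with high confidence and support:
    # "if feature 0 > 1.5 then class 1".
    return X[:, 0] > 1.5

mask = rule_mask(X)
svm = SVC(kernel="rbf").fit(X[~mask], y[~mask])   # SVM on the rest only

def predict(X_new):
    out = np.empty(len(X_new), dtype=int)
    m = rule_mask(X_new)
    out[m] = 1                              # granule labeled by the rule
    out[~m] = svm.predict(X_new[~m])
    return out

print("train accuracy:", (predict(X) == y).mean())
```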

  16. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
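    The coding constraint behind these schemes is simple: a set of lost packets can be XOR-ed into one retransmission only if no receiver is missing two of them, since each receiver then recovers its single missing packet by XOR-ing the coded packet with the copies it already holds. A hypothetical first-fit greedy sketch of that packet-grouping step (the paper's heuristics and its refined coding principle go further):

```python
# Greedy grouping of lost packets for XOR retransmission. A batch is valid
# when every receiver lost at most one packet in it.

def greedy_xor_batches(loss):
    """loss[r][p] is True if receiver r lost packet p.
    Returns batches of packet indices, each sent as one XOR-ed packet."""
    n_recv, n_pkt = len(loss), len(loss[0])
    lost_pkts = [p for p in range(n_pkt) if any(row[p] for row in loss)]
    batches = []
    for p in lost_pkts:
        for batch in batches:
            # Packet p fits if no receiver lost both p and a batch member.
            if all(not (loss[r][p] and any(loss[r][q] for q in batch))
                   for r in range(n_recv)):
                batch.append(p)
                break
        else:
            batches.append([p])
    return batches

# 3 receivers, 5 packets: True marks a loss.
loss = [[True,  False, False, True,  False],
        [False, True,  False, False, False],
        [False, False, True,  False, True ]]
print(greedy_xor_batches(loss))   # [[0, 1, 2], [3, 4]]
```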

  17. On the importance of the Cerulean and Golden-winged Warblers summits in the National Federation of Coffee Growers of Colombia in Bogotá

    Treesearch

    Paul B. Hamel

    2008-01-01

    Cerulean Warbler is a bird with problems; this migratory bird lives in environments on which large numbers of people depend for an adequate productive livelihood, energy, high quality wood products, coffee, and cacao. Solving the biological problems of this species in its complex...

  18. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized in a multicore computer and a GPU respectively.

  19. Structural mapping in statistical word problems: A relational reasoning approach to Bayesian inference.

    PubMed

    Johnson, Eric D; Tubau, Elisabet

    2017-06-01

    Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
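    The facilitation effect can be made concrete with a standard worked example (numbers invented for illustration): in natural frequencies, Bayes' rule reduces to reading two counts off a tree, whereas the probability format requires assembling the formula explicitly.

```latex
% Screening problem in natural frequencies (illustrative numbers):
% 1000 people; 10 have the disease, of whom 9 test positive;
% of the 990 without the disease, 89 test positive anyway.
% Then P(disease | positive) = 9 / (9 + 89) ~ 0.092.
% The same computation in probability format:
\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.0899 \times 0.99}
            \approx 0.092
\]
```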

  20. Applying Cognitive Science to Education: Thinking and Learning in Scientific and Other Complex Domains

    ERIC Educational Resources Information Center

    Reif, Frederick

    2008-01-01

    Many students find it difficult to learn the kinds of knowledge and thinking required by college or high school courses in mathematics, science, or other complex domains. Thus they often emerge with significant misconceptions, fragmented knowledge, and inadequate problem-solving skills. Most instructors or textbook authors approach their teaching…

  1. The effects of critical thinking instruction on training complex decision making.

    PubMed

    Helsdingen, Anne S; van den Bosch, Karel; van Gog, Tamara; van Merriënboer, Jeroen J G

    2010-08-01

    Two field studies assessed the effects of critical thinking instruction on training and transfer of a complex decision-making skill. Critical thinking instruction is based on studies of how experienced decision makers approach complex problems. Participants conducted scenario-based exercises in both simplified (Study 1) and high-fidelity (Study 2) training environments. In both studies, half of the participants received instruction in critical thinking. The other half conducted the same exercises but without critical thinking instruction. After the training, test scenarios were administered to both groups. The first study showed that critical thinking instruction enhanced decision outcomes during both training and the test. In the second study, critical thinking instruction benefited both decision outcomes and processes, specifically on the transfer to untrained problems. The results suggest that critical thinking instruction improves decision strategy and enhances understanding of the general principles of the domain. The results of this study warrant the implementation of critical thinking instruction in training programs for professional decision makers that have to operate in complex and highly interactive, dynamic environments.

  2. Klinefelter Syndrome (KS): Other FAQs

    MedlinePlus

    ... before entrance to middle/high school; difficulty with math at all ages; testing to identify problem areas and remediation for math disabilities; difficulty with complex language processing, specifically with ...

  3. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. We then demonstrate a reinforcement-learning scheme capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
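    The variational form introduced here is a restricted-Boltzmann-machine ansatz: the unnormalized amplitude of a spin configuration is an exponential of visible biases times a product of hidden-unit factors. A minimal sketch of evaluating such amplitudes (optimizing the parameters via the reinforcement-learning scheme is the paper's substance and is omitted):

```python
import numpy as np

# Neural-network quantum state (RBM ansatz): unnormalized amplitude of a
# spin configuration s in {-1, +1}^N with complex parameters (a, b, W):
#   psi(s) = exp(a . s) * prod_j 2 cosh(b_j + sum_i W_ji s_i)

rng = np.random.default_rng(4)
N, M = 8, 16                                   # visible spins, hidden units
a = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
b = 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))
W = 0.01 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

def log_psi(s):
    """log of the unnormalized wave-function amplitude for spins s."""
    theta = b + W @ s
    return a @ s + np.sum(np.log(2 * np.cosh(theta)))

s = rng.choice([-1.0, 1.0], size=N)
print("log amplitude:", log_psi(s))
```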

  4. Complex Problem Solving: What It Is and What It Is Not

    PubMed Central

    Dörner, Dietrich; Funke, Joachim

    2017-01-01

    Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242

  5. A controlled genetic algorithm by fuzzy logic and belief functions for job-shop scheduling.

    PubMed

    Hajri, S; Liouane, N; Hammadi, S; Borne, P

    2000-01-01

    Most scheduling problems are highly complex combinatorial problems. However, stochastic methods such as genetic algorithms yield good solutions. In this paper, we present a controlled genetic algorithm (CGA) based on fuzzy logic and belief functions to solve job-shop scheduling problems. For better performance, we propose an efficient representational scheme, heuristic rules for creating the initial population, and a new methodology for mixing and computing genetic operator probabilities.
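    A stripped-down illustration of the "controlled" idea: a plain GA on a toy scheduling objective, with the mutation probability adapted from population diversity as a crude stand-in for the paper's fuzzy-logic/belief-function controller (toy problem and control rule invented for illustration):

```python
import random

# Toy "controlled" GA: permutation encoding for a single-machine weighted-
# completion-time problem; mutation probability adapts to diversity.

random.seed(5)
p = [3, 1, 4, 2, 6, 5, 2, 3]          # processing times
w = [2, 4, 1, 3, 1, 2, 5, 1]          # weights

def cost(perm):
    t = total = 0
    for j in perm:
        t += p[j]
        total += w[j] * t
    return total

def ox_crossover(p1, p2):
    """Order crossover: copy a slice from p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child[i:j]]
    k = 0
    for idx in list(range(0, i)) + list(range(j, len(p1))):
        child[idx] = rest[k]
        k += 1
    return child

pop = [random.sample(range(len(p)), len(p)) for _ in range(40)]
for gen in range(200):
    pop.sort(key=cost)
    diversity = len({tuple(ind) for ind in pop}) / len(pop)
    pm = 0.05 + 0.4 * (1 - diversity)  # low diversity -> mutate more
    next_pop = pop[:10]                # elitism
    while len(next_pop) < len(pop):
        c = ox_crossover(*random.sample(pop[:20], 2))
        if random.random() < pm:
            i, j = random.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]    # swap mutation
        next_pop.append(c)
    pop = next_pop

print("best cost:", cost(min(pop, key=cost)))
```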

  6. Coordinating complex problem-solving among distributed intelligent agents

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.

  7. Getting Along: Negotiating Authority in High Schools. Final Report.

    ERIC Educational Resources Information Center

    Farrar, Eleanor; Neufeld, Barbara

    Appropriate responses to the authority problem in schools can be informed by a more complex understanding of the issue. Also of importance is knowledge of the ways in which schools and society at large are involved with both the creation of and the solution to the problem of student/teacher authority relations. School people are referring…

  8. How Dogmatic Beliefs Harm Creativity and Higher-Level Thinking. Educational Psychology Series

    ERIC Educational Resources Information Center

    Ambrose, Don, Ed.; Sternberg, Robert J., Ed.

    2011-01-01

    In a world plagued by enormous, complex problems requiring long-range vision and interdisciplinary insights, the need to attend to the influence of dogmatic thinking on the development of high ability and creative intelligence is pressing. This volume introduces the problem of dogmatism broadly, explores the nature and nuances of dogmatic thinking…

  9. Nonlinear functional approximation with networks using adaptive neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1992-01-01

    A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.

  10. The current state of drug discovery and a potential role for NMR metabolomics.

    PubMed

    Powers, Robert

    2014-07-24

    The pharmaceutical industry has significantly contributed to improving human health. Drugs have been credited with both increasing life expectancy and decreasing health care costs. Unfortunately, there has been a recent decline in the creativity and productivity of the pharmaceutical industry. This is a complex issue with many contributing factors resulting from the numerous mergers, increase in out-sourcing, and the heavy dependency on high-throughput screening (HTS). While a simple solution to such a complex problem is unrealistic and highly unlikely, the inclusion of metabolomics as a routine component of the drug discovery process may provide some solutions to these problems. Specifically, as the binding affinity of a chemical lead is evolved during the iterative structure-based drug design process, metabolomics can provide feedback on the selectivity and the in vivo mechanism of action. Similarly, metabolomics can be used to evaluate and validate HTS leads. In effect, metabolomics can be used to eliminate compounds with potential efficacy and side effect problems while prioritizing well-behaved leads with druglike characteristics.

  11. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
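    The core operation is Pareto dominance over multiple output criteria: keep the parameterizations for which no other does at least as well on every criterion and strictly better on one. A minimal sketch, assuming all criteria are errors to be minimized:

```python
import numpy as np

# Pareto filter over model-assessment criteria (all minimized): keep the
# parameterizations not dominated by any other.

def pareto_front(costs):
    """costs: (n_models, n_criteria). Returns mask of non-dominated rows."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated = (np.all(costs <= costs[i], axis=1)
                     & np.any(costs < costs[i], axis=1))
        dominated[i] = False
        if dominated.any():
            keep[i] = False
    return keep

costs = np.array([[1.0, 5.0],   # good on criterion 1
                  [5.0, 1.0],   # good on criterion 2
                  [3.0, 3.0],   # a compromise
                  [4.0, 4.0]])  # dominated by the compromise
print(pareto_front(costs))      # [ True  True  True False]
```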

  12. Planning and Realization of Complex Intentions in Traumatic Brain Injury and Normal Aging

    ERIC Educational Resources Information Center

    Kliegel, Matthias; Eschen, Anne; Thone-Otto, Angelika I. T.

    2004-01-01

    The realization of delayed intentions (i.e., prospective memory) is a highly complex process composed of four phases: intention formation, retention, re-instantiation, and execution. The aim of this study was to investigate if executive functioning impairments are related to problems in the formation, re-instantiation, and execution of a delayed…

  13. Designing collective behavior in a termite-inspired robot construction team.

    PubMed

    Werfel, Justin; Petersen, Kirstin; Nagpal, Radhika

    2014-02-14

    Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general still less understood. We present a multi-agent construction system inspired by mound-building termites, solving such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals.

  14. Crew collaboration in space: a naturalistic decision-making perspective

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith

    2005-01-01

    Successful long-duration space missions will depend on the ability of crewmembers to respond promptly and effectively to unanticipated problems that arise under highly stressful conditions. Naturalistic decision making (NDM) exploits the knowledge and experience of decision makers in meaningful work domains, especially complex sociotechnical systems, including aviation and space. Decision making in these ambiguous, dynamic, high-risk environments is a complex task that involves defining the nature of the problem and crafting a response to achieve one's goals. Goal conflicts, time pressures, and uncertain outcomes may further complicate the process. This paper reviews theory and research pertaining to the NDM model and traces some of the implications for space crews and other groups that perform meaningful work in extreme environments. It concludes with specific recommendations for preparing exploration crews to use NDM effectively.

  15. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results are presented for the interaction of shock waves with a low-density bubble and for the problem of gas flow under gravity. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. Improving the accuracy of the calculations requires choosing a sufficiently fine grid (with a small cell size), which has the drawback of substantially increasing computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR). That is, grid refinement is performed only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, execution on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. The AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models were implemented for calculations on graphics processors using CUDA technology [1].
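    The refinement criterion at the heart of AMR can be sketched in one dimension: flag cells whose local gradient exceeds a threshold and subdivide only those. A toy sketch of the flagging step only (real AMR codes manage nested grid hierarchies, subcycled time stepping, and parallel decomposition on top of this):

```python
import numpy as np

# Toy 1-D AMR flagging step: refine only cells where the solution gradient
# is steep, e.g. near a shock front.

x = np.linspace(0.0, 1.0, 65)              # coarse cell edges
xc = 0.5 * (x[:-1] + x[1:])                # cell centers
u = np.tanh((xc - 0.5) / 0.02)             # profile with a steep front

grad = np.abs(np.diff(u)) / np.diff(xc)    # gradient between neighbors
flag = np.zeros(len(u), dtype=bool)
flag[1:] |= grad > 5.0                     # flag both cells sharing
flag[:-1] |= grad > 5.0                    # a steep interface

print(f"refining {flag.sum()} of {len(u)} coarse cells")
# Each flagged cell would be split (e.g. in two) on the next grid level.
```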

  16. Identification and addressing reduction-related misconceptions

    NASA Astrophysics Data System (ADS)

    Gal-Ezer, Judith; Trakhtenbrot, Mark

    2016-07-01

    Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract technique that involves revealing close non-trivial connections between problems that often seem to have nothing in common. As a result, proper understanding and application of reduction is a serious challenge for students and a source of numerous misconceptions. The main contribution of this paper is detection of such misconceptions, analysis of their roots, and proposing a way to address them in an undergraduate TCC course. Our observations suggest that the main source of the misconceptions is the false intuitive rule "the bigger the set/problem, the harder it is to solve". Accordingly, we developed a series of exercises for proactive prevention of these misconceptions.
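    A concrete instance helps counter the "bigger is harder" intuition: Independent Set reduces to Vertex Cover by a size-preserving complement, since S is an independent set of size k in G exactly when V∖S is a vertex cover of size |V|−k. A minimal sketch of that mapping reduction (graph invented for illustration):

```python
# Mapping reduction INDEPENDENT-SET <=p VERTEX-COVER: the reduction just
# rewrites the instance -- no search is involved.

def is_to_vc(n_vertices, edges, k):
    """Map an Independent Set instance (G, k) to a Vertex Cover instance."""
    return n_vertices, edges, n_vertices - k

def is_vertex_cover(edges, cover):
    return all(u in cover or v in cover for u, v in edges)

# Triangle {0, 1, 2} plus vertex 3 attached to 0.
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
n, k = 4, 2                         # {1, 3} is an independent set of size 2

_, _, k_vc = is_to_vc(n, edges, k)  # mapped instance: cover of size 2?
print(k_vc, is_vertex_cover(edges, {0, 2}))  # 2 True: {0, 2} = V \ {1, 3}
```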

  17. Meshless Method for Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computation domain for them. However, when dealing with complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate problems in an easier manner, even for complex problems. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high speed compressible flow problems. To address this problem, this research deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common collocating meshless methods, RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, so some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to that used in the finite volume method is added to capture steep gradients and shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework, and its application is illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible problem are compared with the exact solution for flow over a ramp and with solutions from finite volume discretization and the discontinuous Galerkin method, both of which require a mesh. The applicability and robustness of the algorithm for complex problems are thus demonstrated.
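    The RBF-DQ idea: approximate a derivative at a node by a weighted sum of function values at neighboring nodes, with weights fixed by demanding exactness for each radial basis function. A minimal 1-D multiquadric sketch (node set and shape parameter chosen arbitrarily; the dissertation's scheme adds upwinding, blending, and conservation on top):

```python
import numpy as np

# 1-D RBF-DQ sketch: weights w solve  sum_j w_j * phi_k(x_j) = phi_k'(x_i)
# for every multiquadric basis phi_k(x) = sqrt((x - x_k)^2 + c^2), so that
# f'(x_i) ~= sum_j w_j f(x_j) is exact on the RBF space.

nodes = np.linspace(0.0, 1.0, 11)
c = 0.2                                    # shape parameter (arbitrary)
i = 5                                      # evaluate f' at nodes[i]

phi = lambda x, xk: np.sqrt((x - xk) ** 2 + c ** 2)
dphi = lambda x, xk: (x - xk) / np.sqrt((x - xk) ** 2 + c ** 2)

A = phi(nodes[None, :], nodes[:, None])    # A[k, j] = phi_k(x_j)
rhs = dphi(nodes[i], nodes)                # phi_k'(x_i) for each k
w = np.linalg.solve(A, rhs)

f = np.sin(2 * np.pi * nodes)
approx = w @ f
exact = 2 * np.pi * np.cos(2 * np.pi * nodes[i])
print(f"f'({nodes[i]:.2f}) ~= {approx:.4f}  (exact {exact:.4f})")
```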

  18. Advanced solar receiver conceptual design study

    NASA Technical Reports Server (NTRS)

    Kesseli, J. B.; Lacy, D. E.

    1987-01-01

    High temperature solar dynamic Brayton and Stirling receivers are investigated as candidate electrical power generating systems for future LEO missions. These receivers are smaller and more efficient than conventional receivers, and they offer less structural complexity and fewer thermal stress problems. Use of the advanced Direct Absorption Storage Receiver allows many of the problems associated with working with high-volumetric-change phase-change materials to be avoided. A specific mass reduction of about 1/3 with respect to the baseline receiver has been realized.

  19. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
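    The screening step can be sketched with the standard pick-and-freeze estimator for first-order Sobol indices: decision variables whose index falls below a cutoff are frozen at nominal values before the evolutionary search. A minimal sketch on a toy function (hypothetical model stands in for the reservoir simulator):

```python
import numpy as np

# Pick-and-freeze estimate of first-order Sobol indices S_i, used to screen
# out insensitive decision variables before optimization.

rng = np.random.default_rng(6)
D, N = 5, 20000

def model(X):
    # Hypothetical objective: strongly driven by x0, x1; weakly by the rest.
    return 4 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2:].sum(axis=1)

A = rng.uniform(size=(N, D))
B = rng.uniform(size=(N, D))
fA, fB = model(A), model(B)
var = np.var(fA)

S = np.empty(D)
for i in range(D):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # resample only variable i
    S[i] = np.mean(fB * (model(ABi) - fA)) / var

print(np.round(S, 3))                   # x0, x1 dominate; x2..x4 screenable
```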

  20. Early stage response problem for post-disaster incidents

    NASA Astrophysics Data System (ADS)

    Kim, Sungwoo; Shin, Youngchul; Lee, Gyu M.; Moon, Ilkyeong

    2018-07-01

    Research on evacuation plans for reducing damages and casualties has been conducted to advise defenders against threats. However, despite the attention given to the research in the past, emergency response management, designed to neutralize hazards, has been undermined since planners frequently fail to apprehend the complexities and contexts of the emergency situation. Therefore, this study considers a response problem with unique characteristics for the duration of the emergency. An early stage response problem is identified to find the optimal routing and scheduling plan for responders to prevent further hazards. Due to the complexity of the proposed mathematical model, two algorithms are developed. Data from a high-rise building, called Central City in Seoul, Korea, are used to evaluate the algorithms. Results show that the proposed algorithms can procure near-optimal solutions within a reasonable time.

  1. Human-Centered Aviation Automation: Principles and Guidelines

    NASA Technical Reports Server (NTRS)

    Billings, Charles E.

    1996-01-01

    This document presents principles and guidelines for human-centered automation in aircraft and in the aviation system. Drawing upon operational experience with highly automated aircraft, it describes classes of problems that have occurred in these vehicles, the effects of advanced automation on the human operators of the aviation system, and ways in which these problems may be avoided in the design of future aircraft and air traffic management automation. Many incidents and a few serious accidents suggest that these problems are related to automation complexity, autonomy, coupling, and opacity, or inadequate feedback to operators. An automation philosophy that emphasizes improved communication, coordination and cooperation between the human and machine elements of this complex, distributed system is required to improve the safety and efficiency of aviation operations in the future.

  2. High-frequency CAD-based scattering model: SERMAT

    NASA Astrophysics Data System (ADS)

    Goupil, D.; Boutillier, M.

    1991-09-01

    Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have proven their efficiency on simple objects for a long time. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to wavelength; and (2) the implementation of these objects in a software package (SERMAT) allows fast and sufficiently precise RCS calculations to meet industry requirements in the domain of stealth.

  3. A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu

    2016-12-01

    This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
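    The improved ant colony component can be illustrated with a plain ACO for the underlying TSP (the paper solves the generalized TSP over surface pieces; parameters here are generic textbook choices):

```python
import numpy as np

# Plain ant colony optimization for a small random TSP, illustrating the
# search mechanism behind the paper's improved algorithm for the GTSP.

rng = np.random.default_rng(7)
n = 12
pts = rng.uniform(size=(n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n)

tau = np.ones((n, n))                       # pheromone
alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0
best_tour, best_len = None, np.inf

for it in range(100):
    tours = []
    for _ in range(20):                     # one tour per ant
        tour = [0]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, dtype=bool)
            mask[tour] = False              # forbid visited cities
            p = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
            tour.append(rng.choice(n, p=p / p.sum()))
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((tour, length))
        if length < best_len:
            best_tour, best_len = tour, length
    tau *= 1 - rho                          # evaporation
    for tour, length in tours:              # deposit
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += Q / length
            tau[b, a] += Q / length

print("best tour length:", round(best_len, 3))
```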

  4. Medical students' personal experience of high-stakes failure: case studies using interpretative phenomenological analysis.

    PubMed

    Patel, R S; Tarrant, C; Bonas, S; Shaw, R L

    2015-05-12

    Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice, therefore helping individuals overcome problems and respond appropriately is important. There is little understanding about what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case-by-case, allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own individual combination of problems, but experienced remediation as a one-dimensional intervention with focus only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are a part of the picture.

  5. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including, circular to circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.
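    For the circular-to-circular coplanar cases, the classical Hohmann transfer is the closed-form two-impulse baseline any of these optimizers should reproduce. A quick worked computation for a LEO-to-GEO transfer (standard textbook formulas; gravitational parameter and radii approximate):

```python
import math

# Hohmann two-impulse transfer between coplanar circular orbits.

mu = 398600.4418            # Earth's GM, km^3/s^2
r1 = 6378.0 + 300.0         # 300 km LEO radius, km
r2 = 42164.0                # GEO radius, km

a_t = 0.5 * (r1 + r2)                                   # transfer ellipse
dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
t_transfer = math.pi * math.sqrt(a_t ** 3 / mu)         # half-orbit time

print(f"dv1 = {dv1:.3f} km/s, dv2 = {dv2:.3f} km/s, "
      f"total = {dv1 + dv2:.3f} km/s, time = {t_transfer / 3600:.2f} h")
```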

  6. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
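    The expensive object being avoided is the nuclear-norm proximal step, singular value thresholding, whose full-SVD cost motivates the active-subspace factorization. A minimal sketch of that baseline step (the paper's contribution is precisely replacing it with small factored updates):

```python
import numpy as np

# Singular value thresholding: the proximal operator of the nuclear norm,
# prox_{tau ||.||_*}(Y) = U diag(max(s - tau, 0)) V^T. Its full SVD is the
# per-iteration cost the active-subspace factorization is designed to avoid.

def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

rng = np.random.default_rng(8)
L = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 200))   # rank-5 matrix
Y = L + 0.01 * rng.normal(size=L.shape)                     # light noise

X = svt(Y, tau=1.0)                       # noise singular values drop to 0
print("rank after thresholding:", np.linalg.matrix_rank(X, tol=1e-6))
```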

  7. Conversion of wastelands into state ownership for the needs of high-rise construction

    NASA Astrophysics Data System (ADS)

    Ganebnykh, Elena

    2018-03-01

    High-rise construction in big cities faces the problem of land shortage in downtown areas. Audit of economic complexes showed a large volume of wastelands. The conversion of wastelands into state and municipal ownership helps in part to solve the problem of the lack of space for high-rise construction in the urban area in the format of infill construction. The article investigates the problem of the conversion of wastelands into state and municipal ownership. The research revealed no clear algorithm for converting wastelands into state and municipal ownership. To form a unified system for identifying such plots, a universal algorithm was developed to identify and convert ownerless immovable property into state or municipal ownership.

  8. An Improved Memetic Algorithm for Break Scheduling

    NASA Astrophysics Data System (ADS)

    Widl, Magdalena; Musliu, Nysret

    In this paper we consider solving a complex real life break scheduling problem. This problem of high practical relevance arises in many working areas, e.g. in air traffic control and other fields where supervisory personnel work. The objective is to assign breaks to employees such that various constraints reflecting legal demands or ergonomic criteria are satisfied and staffing requirement violations are minimised.

  9. Remedial Education in Community Colleges: Understanding the Problem and Proposing Solutions. UCLA Community College Bibliography

    ERIC Educational Resources Information Center

    McJunkin, Kyle Stewart

    2005-01-01

    In recent years, community colleges have increasingly taken on the task of providing remedial education to their students. For policymakers and educators, understanding why remediation is on the increase is a frustrating problem made so by the complexity of the causes behind it. Are students graduating from high school less prepared or are academic…

  10. Computer modeling of electromagnetic problems using the geometrical theory of diffraction

    NASA Technical Reports Server (NTRS)

    Burnside, W. D.

    1976-01-01

    Some applications of the geometrical theory of diffraction (GTD), a high frequency ray optical solution to electromagnetic problems, are presented. GTD extends geometric optics, which does not take into account the diffractions occurring at edges, vertices, and various other discontinuities. Diffraction solutions, analysis of basic structures, construction of more complex structures, and coupling using GTD are discussed.

  11. Blended Learning Experience in a Programming Language Course and the Effect of the Thinking Styles of the Students on Success and Motivation

    ERIC Educational Resources Information Center

    Yagci, Mustafa

    2016-01-01

    High-level thinking and problem-solving skills are requirements of computer programming that many students struggle with. Individual differences such as motivation, attitude towards programming, the student's thinking style, and the complexity of the programming language influence students' success in programming. Thus,…

  12. Aggregation of LoD 1 building models as an optimization problem

    NASA Astrophysics Data System (ADS)

    Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.

    3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan, and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
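    To make the MIP flavour of the aggregation concrete, the sketch below encodes a toy version of the subset-selection subproblem with the open-source PuLP library. All data, variable names, and the exact objective are illustrative assumptions, not the authors' formulation: binary variables pick which buildings join one aggregate, a continuous variable carries the assigned average height, and total height deviation is minimised subject to a minimum aggregated footprint.

```python
# Toy LoD 1 aggregation subproblem, sketched with PuLP (illustrative only).
import pulp

heights = [10.0, 12.0, 11.0, 25.0]   # hypothetical block heights
areas = [50.0, 40.0, 45.0, 60.0]     # hypothetical ground-plan areas
target_area = 120.0                  # minimum footprint of the aggregate
n = len(heights)
M = max(heights)                     # big-M bound for the linearisation

prob = pulp.LpProblem("lod1_aggregation", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]   # select building i?
h = pulp.LpVariable("h_avg", lowBound=0)                         # assigned height
d = [pulp.LpVariable(f"d{i}", lowBound=0) for i in range(n)]     # height deviation

for i in range(n):
    # d_i >= |heights_i - h| only when building i is selected (big-M trick).
    prob += d[i] >= heights[i] - h - M * (1 - x[i])
    prob += d[i] >= h - heights[i] - M * (1 - x[i])

prob += pulp.lpSum(areas[i] * x[i] for i in range(n)) >= target_area
prob += pulp.lpSum(d)                # objective: total height deviation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected:", [int(pulp.value(v)) for v in x], "height:", pulp.value(h))
```

    An off-the-shelf branch-and-bound solver (CBC here) returns the global optimum of this small instance, which is the practical appeal of the MIP formulation the abstract describes.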

  13. A Skyscraping Feat

    ERIC Educational Resources Information Center

    Roberts, Sarah A.; Lee, Jean S.

    2013-01-01

    Research shows that the greatest gains in student learning in mathematics classrooms occur in classrooms in which there is sustained use of high-cognitive-demand tasks throughout instruction (Boston and Smith 2009). High-cognitive-demand tasks, which this article will refer to as rich tasks, are mathematics problems that are complex, less…

  14. Cognitive Load Theory: A Broader View on the Role of Memory in Learning and Education

    ERIC Educational Resources Information Center

    Paas, Fred; Ayres, Paul

    2014-01-01

    According to cognitive load theory (CLT), the limitations of working memory (WM) in the learning of new tasks together with its ability to cooperate with an unlimited long-term memory (LTM) for familiar tasks enable human beings to deal effectively with complex problems and acquire highly complex knowledge and skills. With regard to WM, CLT has…

  15. Self-Regulation in the Midst of Complexity: A Case Study of High School Physics Students Engaged in Ill-Structured Problem Solving

    NASA Astrophysics Data System (ADS)

    Milbourne, Jeffrey David

    The purpose of this dissertation study was to explore the experiences of high school physics students who were solving complex, ill-structured problems, in an effort to better understand how self-regulatory behavior mediated the project experience. Consistent with Voss, Green, Post, and Penner's (1983) conception of an ill-structured problem in the natural sciences, the 'problems' consisted of scientific research projects that students completed under the supervision of a faculty mentor. Zimmerman and Campillo's (2003) self-regulatory framework of problem solving provided a holistic guide to data collection and analysis of this multi-case study, with five individual student cases. The study's results are explored in two manuscripts, each targeting a different audience. The first manuscript, intended for the science education research community, presents a thick, rich description of the students' project experiences, consistent with a qualitative, case study analysis. Findings suggest that intrinsic interest was an important self-regulatory factor that helped motivate students throughout their project work, and that the self-regulatory cycle of forethought, performance monitoring, and self-reflection was an important component of the problem-solving process. Findings also support the application of Zimmerman and Campillo's framework to complex, ill-structured problems, particularly the cyclical nature of the framework. Finally, this study suggests that scientific research projects, with the appropriate support, can be a mechanism for improving students' self-regulatory behavior. The second manuscript, intended for physics practitioners, combines the findings of the first manuscript with the perspectives of the primary, on-site research mentor, who has over a decade's worth of experience mentoring students doing physics research. His experience suggests that a successful research experience requires certain characteristics, including a slow 'on-ramp' to the research experience, space to experience productive failure, and an opportunity to enjoy the work being done.

  16. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
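    The Cauchy-Kowalewski time advancement can be sketched for a generic 1-D linear hyperbolic system (the paper applies the idea to the linearized Euler equations; the notation here is a simplified assumption):

    $$ u_t + A u_x = 0 \;\Rightarrow\; \frac{\partial^k u}{\partial t^k} = (-A \partial_x)^k u, \qquad u(x, t + \Delta t) \approx \sum_{k=0}^{K} \frac{\Delta t^k}{k!} (-A \partial_x)^k u(x, t), $$

    so arbitrarily high temporal order follows from spatial derivatives alone, which the Hermite divided-difference interpolation supplies at each grid point.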

  17. Robust Design of a Particle-Free Silver-Organo-Complex Ink with High Conductivity and Inkjet Stability for Flexible Electronics.

    PubMed

    Vaseem, Mohammad; McKerricher, Garret; Shamim, Atif

    2016-01-13

    Currently, silver-nanoparticle-based inkjet ink is commercially available. This type of ink has several serious problems such as a complex synthesis protocol, high cost, high sintering temperatures (∼200 °C), particle aggregation, nozzle clogging, poor shelf life, and jetting instability. For the emerging field of printed electronics, these shortcomings in conductive inks are barriers for their widespread use in practical applications. Formulating particle-free silver inks has potential to solve these issues and requires careful design of the silver complexation. The ink complex must meet various requirements, such as in situ reduction, optimum viscosity, storage and jetting stability, smooth uniform sintered films, excellent adhesion, and high conductivity. This study presents a robust formulation of silver-organo-complex (SOC) ink, where complexing molecules act as reducing agents. The 17 wt % silver loaded ink was printed and sintered on a wide range of substrates with uniform surface morphology and excellent adhesion. The jetting stability was monitored for 5 months to confirm that the ink was robust and highly stable with consistent jetting performance. Radio frequency inductors, which are highly sensitive to metal quality, were demonstrated as a proof of concept on flexible PEN substrate. This is a major step toward producing high-quality electronic components with a robust inkjet printing process.

  18. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  19. The Use of Video Cases in a Multimedia Learning Environment for Facilitating High School Students' Inquiry into a Problem from Varying Perspectives

    NASA Astrophysics Data System (ADS)

    Zydney, Janet Mannheimer; Grincewicz, Amy

    2011-12-01

    This study investigated the connection between the use of video cases within a multimedia learning environment and students' inquiry into a socio-scientific problem. The software program was designed based on principles from the Cognitive Flexibility Theory (CFT) and incorporated video cases of experts with differing perspectives. Seventy-nine 10th-grade students in an urban high school participated in this study. After watching the expert videos, students generated investigative questions and reflected on how their ideas changed over time. This study found a significant correlation between the time students spent watching the expert videos and their ability to consider the problem's perspectives as well as their ability to integrate these perspectives within their questions. Moreover, problem-solving ability and time watching the videos were detected as possible influential predictors of students' consideration of the problem's perspectives within their questions. Although students watched all video cases in equivalent ways, one of the video cases, which incorporated multiple perspectives as opposed to just presenting one perspective, appeared most influential in helping students integrate the various perspectives into their own thinking. A qualitative analysis of students' reflections indicated that many students appreciated the complexity, authenticity, and ethical dimensions of the problem. It also revealed that while the majority of students thought critically about the problem, some students still had naïve or simplistic ways of thinking. This study provided some preliminary evidence that offering students the opportunity to watch videos of different perspectives may influence them to think in alternative ways about a complex problem.

  20. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by 'self-organized criticality,' a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire 'gene pool' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called 'avalanches,' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
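    A minimal sketch of the single-parameter (tau) variant of extremal optimization, here applied to a random Ising spin glass, a benchmark commonly associated with the method; the instance size, tau value, and step count are illustrative assumptions:

```python
# Minimal tau-EO sketch on a random Ising spin glass (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, tau, steps = 64, 1.4, 20000
J = rng.standard_normal((N, N))
J = (J + J.T) / 2                      # symmetric couplings
np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=N)        # random initial configuration

def energy(spins):
    return -0.5 * spins @ J @ spins

ranks = np.arange(1, N + 1)
p = ranks ** (-tau)                    # power-law selection over fitness ranks
p /= p.sum()

best_s, best_e = s.copy(), energy(s)
for _ in range(steps):
    fitness = s * (J @ s)              # local fitness of each spin
    order = np.argsort(fitness)        # worst (most undesirable) first
    i = order[rng.choice(N, p=p)]      # rank k chosen with prob ~ k^(-tau)
    s[i] = -s[i]                       # unconditionally replace the chosen element
    e = energy(s)
    if e < best_e:
        best_e, best_s = e, s.copy()

print("best energy per spin:", best_e / N)
```

    Unlike simulated annealing, no move is ever rejected; the power-law rank selection alone generates the avalanche-like fluctuations that carry the search between local optima.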

  1. The use of methods of structural optimization at the stage of designing high-rise buildings with steel construction

    NASA Astrophysics Data System (ADS)

    Vasilkin, Andrey

    2018-03-01

    The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted variant will be the most efficient and economical. However, in modern market conditions, taking into account the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze, and compare any significant number of options. To solve this problem, it is expedient to use the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the complexity of designing. Methods of structural and parametric optimization were adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes that illustrate and explain the introduction of structural optimization into the traditional design of steel frames are constructed. Thus, to synthesize and compare design solutions for steel frames, it is proposed to develop computing facilities, based on methods of structural and parametric optimization, that significantly reduce the complexity of search-based design.

  2. Hybrid DG/FV schemes for magnetohydrodynamics and relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Núñez-de la Rosa, Jonatan; Munz, Claus-Dieter

    2018-01-01

    This paper presents a high order hybrid discontinuous Galerkin/finite volume scheme for solving the equations of magnetohydrodynamics (MHD) and of special relativistic hydrodynamics (SRHD) on quadrilateral meshes. In this approach, for the spatial discretization, an arbitrary high order discontinuous Galerkin spectral element (DG) method is combined with a finite volume (FV) scheme in order to simulate complex flow problems involving strong shocks. For the time discretization, a fourth order strong stability preserving Runge-Kutta method is used. In the proposed hybrid scheme, a shock indicator is computed at the beginning of each Runge-Kutta stage in order to flag those elements containing shock waves or discontinuities. The DG solution in these troubled elements at the current time step is then projected onto a subdomain composed of finite volume subcells. The DG operator is applied to the unflagged elements, which, in principle, are oscillation-free, while the troubled elements are evolved with a robust second/third order FV operator. With this approach we are able to numerically simulate very challenging problems in the context of MHD and SRHD in one and two space dimensions and with very high order polynomials. We present convergence tests and a comprehensive one- and two-dimensional test bench for both equation systems, focusing on problems with strong shocks. The presented hybrid approach shows that numerical schemes of very high order of accuracy are able to simulate these complex flow problems in an efficient and robust manner.
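    For concreteness, one widely used modal smoothness indicator (of Persson-Peraire type; the authors' own indicator may differ) flags a DG element $e$ when the energy in its highest modes is large relative to the total:

    $$ s_e = \log_{10} \frac{(u_N - u_{N-1}, \; u_N - u_{N-1})_e}{(u_N, \; u_N)_e}, $$

    where $u_N$ is the degree-$N$ element solution and $u_{N-1}$ its truncation to degree $N-1$; elements with $s_e$ above a resolution-dependent threshold are handed to the FV subcell operator for that Runge-Kutta stage.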

  3. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given how time consuming these computational tasks are, software for automatic and unified monitoring of such computations is needed. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also a nontrivial problem, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the management of those calculations, and presents the part of its concept aimed at initial data generation on HPC systems.

  4. Planning Following Stroke: A Relational Complexity Approach Using the Tower of London

    PubMed Central

    Andrews, Glenda; Halford, Graeme S.; Chappell, Mark; Maujean, Annick; Shum, David H. K.

    2014-01-01

    Planning on the 4-disk version of the Tower of London (TOL4) was examined in stroke patients and unimpaired controls. Overall TOL4 solution scores indicated impaired planning in the frontal stroke but not non-frontal stroke patients. Consistent with the claim that processing the relations between current states, intermediate states, and goal states is a key process in planning, the domain-general relational complexity metric was a good indicator of the experienced difficulty of TOL4 problems. The relational complexity metric shared variance with task-specific metrics of moves to solution and search depth. Frontal stroke patients showed impaired planning compared to controls on problems at all three complexity levels, but at only two of the three levels of moves to solution, search depth and goal ambiguity. Non-frontal stroke patients showed impaired planning only on the most difficult quaternary-relational and high search depth problems. An independent measure of relational processing (viz., Latin square task) predicted TOL4 solution scores after controlling for stroke status and location, and executive processing (Trail Making Test). The findings suggest that planning involves a domain-general capacity for relational processing that depends on the frontal brain regions. PMID:25566042

  5. Solving Complex Problems: A Convergent Approach to Cognitive Load Measurement

    ERIC Educational Resources Information Center

    Zheng, Robert; Cook, Anne

    2012-01-01

    The study challenged the current practices in cognitive load measurement involving complex problem solving by manipulating the presence of pictures in multiple rule-based problem-solving situations and examining the cognitive load resulting from both off-line and online measures associated with complex problem solving. Forty-eight participants…

  6. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    PubMed

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to studying how the depth of feedforward neural networks impacts their ability to implement high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared by deriving upper and lower bounds on their complexity, and by studying how the complexity depends on the number of hidden units and the activation function used. The obtained results support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  7. From problem solving to problem definition: scrutinizing the complex nature of clinical practice.

    PubMed

    Cristancho, Sayra; Lingard, Lorelei; Regehr, Glenn

    2017-02-01

    In medical education, we have tended to present problems as being singular, stable, and solvable. Problem solving has, therefore, drawn much of medical education researchers' attention. This focus has been important but it is limited in terms of preparing clinicians to deal with the complexity of the 21st century healthcare system in which they will provide team-based care for patients with complex medical illness. In this paper, we use the Soft Systems Engineering principles to introduce the idea that in complex, team-based situations, problems usually involve divergent views and evolve with multiple solution iterations. As such we need to shift the conversation from (1) problem solving to problem definition, and (2) from a problem definition derived exclusively at the level of the individual to a definition derived at the level of the situation in which the problem is manifested. Embracing such a focus on problem definition will enable us to advocate for novel educational practices that will equip trainees to effectively manage the problems they will encounter in complex, team-based healthcare.

  8. Aiding the search: Examining individual differences in multiply-constrained problem solving.

    PubMed

    Ellis, Derek M; Brewer, Gene A

    2018-07-01

    Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions such that high WMC participants benefit more from preexposure to correct solutions than low WMC (although low WMC participants have preexposure benefits as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.

  9. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.

  10. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - 'redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation ('redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with that of 'redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
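    A minimal sketch of redundant calibration posed as a real-valued least-squares problem and solved with Levenberg-Marquardt through SciPy; the east-west regular array, the group bookkeeping, and the sizes are illustrative assumptions, and this generic sketch is not the paper's 'redundant STEFCAL' implementation:

```python
# Redundant calibration via Levenberg-Marquardt (generic sketch, not the
# paper's algorithm). Model: v_pq = g_p * conj(g_q) * y_group(p,q).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_ant = 5
pairs = [(p, q) for p in range(n_ant) for q in range(p + 1, n_ant)]
group = {pq: pq[1] - pq[0] - 1 for pq in pairs}   # redundant group per baseline length
n_grp = n_ant - 1

g_true = (1 + 0.1 * rng.standard_normal(n_ant)) * np.exp(1j * 0.2 * rng.standard_normal(n_ant))
y_true = rng.standard_normal(n_grp) + 1j * rng.standard_normal(n_grp)
v_obs = np.array([g_true[p] * np.conj(g_true[q]) * y_true[group[(p, q)]]
                  for p, q in pairs])

def unpack(x):
    g = x[:n_ant] + 1j * x[n_ant:2 * n_ant]
    y = x[2 * n_ant:2 * n_ant + n_grp] + 1j * x[2 * n_ant + n_grp:]
    return g, y

def residuals(x):
    g, y = unpack(x)
    model = np.array([g[p] * np.conj(g[q]) * y[group[(p, q)]] for p, q in pairs])
    r = v_obs - model
    return np.concatenate([r.real, r.imag])   # LM needs real residuals

x0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant),    # gains ~ 1 + 0j
                     np.ones(n_grp), np.zeros(n_grp)])   # group visibilities
sol = least_squares(residuals, x0, method="lm")
print("converged:", sol.success, "final cost:", sol.cost)
```

    Splitting complex unknowns into real and imaginary parts is the standard workaround for real-valued solvers; the known gauge degeneracies of redundant calibration (overall amplitude and phase) remain in the solution and are usually fixed afterwards.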

  11. Demonstration of a tool for automatic learning and re-use of knowledge in the activated sludge process.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.

  12. NGL Viewer: Web-based molecular graphics for large complexes.

    PubMed

    Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W

    2018-05-29

    The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.

  13. The Current State of Drug Discovery and a Potential Role for NMR Metabolomics

    PubMed Central

    2015-01-01

    The pharmaceutical industry has significantly contributed to improving human health. Drugs have been attributed to both increasing life expectancy and decreasing health care costs. Unfortunately, there has been a recent decline in the creativity and productivity of the pharmaceutical industry. This is a complex issue with many contributing factors resulting from the numerous mergers, increase in out-sourcing, and the heavy dependency on high-throughput screening (HTS). While a simple solution to such a complex problem is unrealistic and highly unlikely, the inclusion of metabolomics as a routine component of the drug discovery process may provide some solutions to these problems. Specifically, as the binding affinity of a chemical lead is evolved during the iterative structure-based drug design process, metabolomics can provide feedback on the selectivity and the in vivo mechanism of action. Similarly, metabolomics can be used to evaluate and validate HTS leads. In effect, metabolomics can be used to eliminate compounds with potential efficacy and side effect problems while prioritizing well-behaved leads with druglike characteristics. PMID:24588729

  14. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It remains an open question whether, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  15. [The mind-brain problem (II): about consciousness].

    PubMed

    Tirapu-Ustarroz, J; Goni-Saez, F

    2016-08-16

    Consciousness is the result of a series of neurobiological processes in the brain and is, in turn, a feature of the level of its complexity. In fact, being conscious and being aware place us before what Chalmers called the 'easy problem' and the 'hard problem' of consciousness. The first refers to aspects such as wakefulness, attention or knowledge, while the second is concerned with such complex concepts as self-awareness, the 'neural self' or social cognition. In this sense it can be said that the concept of consciousness as a unitary entity poses the problem of approaching a highly complex reality. We outline the main models that have addressed the topic of consciousness from a neuroscientific perspective. On the one hand, there are the conscious-experience models of Crick, Edelman and Tononi, and Llinas; on the other, the models and neuronal bases of self-consciousness of authors such as Damasio (core and extended consciousness), Tulving (autonoetic and noetic consciousness and chronesthesia), the problem of qualia (Dennett, Popper, Ramachandran) and the cognit model (Fuster). All the stimuli we receive from the outside world and from our own internal world are converted and processed by the brain so as to integrate them, and from there they become part of our identity. The perception of a dog, and the ability to recognise it as such, or the understanding of our own consciousness, are the result of the functioning of brain, neuronal and synaptic structures. The more complex processes of consciousness, such as self-awareness or empathy, are probably emergent brain processes.

  16. Self-reduction of a copper complex MOD ink for inkjet printing conductive patterns on plastics.

    PubMed

    Farraj, Yousef; Grouchko, Michael; Magdassi, Shlomo

    2015-01-31

    Highly conductive copper patterns on low-cost flexible substrates are obtained by inkjet printing a metal complex based ink. Upon heating the ink, the soluble complex, which is composed of copper formate and 2-amino-2-methyl-1-propanol, decomposes under nitrogen at 140 °C and is converted to pure metallic copper. The decomposition process of the complex is investigated and a suggested mechanism is presented. The ink is stable in air for prolonged periods, with no sedimentation or oxidation problems, which are usually encountered in copper nanoparticle based inks.

  17. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    NASA Astrophysics Data System (ADS)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of computationally complex decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration to the decoder of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoder complexity to approximately one hundredth that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.

  18. Nuclear Forensics and Radiochemistry: Reaction Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rundberg, Robert S.

    In the intense neutron flux of a nuclear explosion, the production of isotopes may occur through successive neutron induced reactions. The pathway to these isotopes illustrates both the complexity of the problem and the need for high quality nuclear data. The growth and decay of radioactive isotopes can follow a similarly complex network. The Bateman equation will be described and modified to apply to the transmutation of isotopes in a high flux reactor. An alternative model of growth and decay, the GD code, that can be applied to fission products will also be described.
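    For reference, the classical Bateman solution for a linear decay chain $1 \to 2 \to \dots \to n$ with distinct decay constants $\lambda_i$ and initial inventory $N_1(0)$ is

    $$ N_n(t) = N_1(0) \left( \prod_{i=1}^{n-1} \lambda_i \right) \sum_{i=1}^{n} \frac{e^{-\lambda_i t}}{\prod_{j=1, j \neq i}^{n} (\lambda_j - \lambda_i)} ; $$

    in a high neutron flux $\phi$ the same structure is commonly retained by replacing each $\lambda_i$ with an effective removal constant $\lambda_i + \sigma_i \phi$, accounting for destruction by capture as well as decay (a standard modification; the report's specific treatment may differ).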

  19. An improved classification tree analysis of high cost modules based upon an axiomatic definition of complexity

    NASA Technical Reports Server (NTRS)

    Tian, Jianhui; Porter, Adam; Zelkowitz, Marvin V.

    1992-01-01

    Identification of high cost modules has been viewed as one mechanism to improve overall system reliability, since such modules tend to produce more than their share of problems. A decision tree model was used to identify such modules. In this current paper, a previously developed axiomatic model of program complexity is merged with the previously developed decision tree process for an improvement in the ability to identify such modules. This improvement was tested using data from the NASA Software Engineering Laboratory.

  20. A High-Order Immersed Boundary Method for Acoustic Wave Scattering and Low-Mach Number Flow-Induced Sound in Complex Geometries

    PubMed Central

    Seo, Jung Hee; Mittal, Rajat

    2010-01-01

    A new sharp-interface immersed boundary method based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method applies the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
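    The ghost-cell idea can be sketched in one dimension: fit a polynomial to nearby fluid samples by weighted least squares, heavily weight the boundary-condition row, and evaluate the fit at the ghost point. The stencil, weights, and polynomial degree below are illustrative assumptions, not the paper's exact construction.

```python
# 1-D sketch of a ghost value from a weighted least-squares polynomial fit
# that (approximately) enforces a Dirichlet boundary condition.
import numpy as np

def ghost_value(x_fluid, u_fluid, x_bc, u_bc, x_ghost, degree=4):
    xs = np.concatenate([[x_bc], x_fluid]) - x_bc     # boundary at the origin
    us = np.concatenate([[u_bc], u_fluid])
    V = np.vander(xs, degree + 1, increasing=True)    # rows: 1, x, x^2, ...
    w = 1.0 / (1e-12 + np.abs(xs))                    # favour nearby samples
    w[0] = 1e6                                        # strongly weight the BC row
    coef, *_ = np.linalg.lstsq(V * w[:, None], us * w, rcond=None)
    return np.polyval(coef[::-1], x_ghost - x_bc)     # evaluate at the ghost point

# Smoke test on u(x) = sin(x) with u(0) = 0; the ghost value extrapolates
# across the boundary to high order.
x_f = np.linspace(0.1, 0.5, 6)
print(ghost_value(x_f, np.sin(x_f), 0.0, 0.0, -0.1), "vs", np.sin(-0.1))
```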

  1. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-09-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty in the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy that alleviates this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
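    Since the paper uses the chaotic Lorenz-95 model as its testbed, a minimal integrator is easy to sketch; the forcing F = 8, N = 40 variables, and RK4 time stepping are the customary (assumed) choices:

```python
# Minimal Lorenz-95/96 integrator, a cheap chaotic testbed for data assimilation.
import numpy as np

def l96_rhs(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(40)  # perturbed rest state
for _ in range(1000):                                          # spin up onto the attractor
    x = rk4_step(x)
print(x[:5])
```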

  2. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.

  3. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems with a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  4. Low-Complexity User Selection for Rate Maximization in MIMO Broadcast Channels with Downlink Beamforming

    PubMed Central

    Silva, Adão; Gameiro, Atílio

    2014-01-01

    We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem, and its main goal is to create a set of quasi-orthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed, in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics, we present a mathematical model that describes the dynamics of the user selection process, which casts the user selection problem as an integer linear program. Numerical results show that our approach is highly efficient at forming groups of quasi-orthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928
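    The null space projection power at the heart of such selection schemes is easy to sketch: greedily pick the user whose channel retains the most power after projecting out the already-selected users' channels. The function names, greedy loop, and sizes below are illustrative assumptions, not the paper's exact metric or algorithm.

```python
# Greedy semi-orthogonal user selection via null-space projection power
# (generic sketch, not the paper's algorithm).
import numpy as np

def projection_power(h, H_sel):
    # Power of h in the orthogonal complement of the selected users' channels.
    if H_sel.shape[0] == 0:
        return float(np.linalg.norm(h) ** 2)
    Q, _ = np.linalg.qr(H_sel.T)          # orthonormal basis of the selected span
    h_perp = h - Q @ (Q.conj().T @ h)
    return float(np.linalg.norm(h_perp) ** 2)

rng = np.random.default_rng(0)
K, Nt, n_sel = 8, 4, 4                    # users, BS antennas, users to select
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

selected = []
for _ in range(n_sel):
    H_sel = H[selected] if selected else np.empty((0, Nt), complex)
    scores = [projection_power(H[k], H_sel) if k not in selected else -1.0
              for k in range(K)]
    selected.append(int(np.argmax(scores)))
print("selected users:", selected)
```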

  5. The Strength of the Strongest Ties in Collaborative Problem Solving

    NASA Astrophysics Data System (ADS)

    de Montjoye, Yves-Alexandre; Stopczynski, Arkadiusz; Shmueli, Erez; Pentland, Alex; Lehmann, Sune

    2014-06-01

    Complex problem solving in science, engineering, and business has become a highly collaborative endeavor. Teams of scientists or engineers collaborate on projects using their social networks to gather new ideas and feedback. Here we bridge the literature on team performance and information networks by studying teams' problem solving abilities as a function of both their within-team networks and their members' extended networks. We show that, while an assigned team's performance is strongly correlated with its networks of expressive and instrumental ties, only the strongest ties in both networks have an effect on performance. Both networks of strong ties explain more of the variance than other factors, such as measured or self-evaluated technical competencies, or the personalities of the team members. In fact, the inclusion of the network of strong ties renders these factors non-significant in the statistical analysis. Our results have consequences for the organization of teams of scientists, engineers, and other knowledge workers tackling today's most complex problems.

  7. The Proposal of the Model for Developing Dispatch System for Nationwide One-Day Integrative Planning

    NASA Astrophysics Data System (ADS)

    Kim, Hyun Soo; Choi, Hyung Rim; Park, Byung Kwon; Jung, Jae Un; Lee, Jin Wook

    Dispatch planning problems for container trucks are classified as pickup-and-delivery problems, highly complex issues that must account for various real-world constraints. At present, however, dispatch is planned through the control system, so an automated planning system is needed from the viewpoint of nationwide integrative planning. The purpose of this study is therefore to suggest a model for developing an automated dispatch system based on a constraint satisfaction problem formulation and a meta-heuristic algorithm. In further work, a practical system will be developed and evaluated against various results. The suggested model raises the complexity of the problem by considering various constraints that earlier studies did not. It is also suggested that further work should add a real-time monitoring function for vehicles and cargo based on information technology.

  8. A Case Study of Middle School Teachers' Preparations for High-Stakes Assessments

    ERIC Educational Resources Information Center

    Yeary, David Lee

    2017-01-01

    Students, educators, and schools across the country have been presented with challenges as a result of rigorous standards and high-complexity tests. The problem addressed in this case study was that teachers in a rural middle school in a southeastern state were preparing students to take a new high-stakes state-mandated assessment in English…

  9. Toward Modeling the Intrinsic Complexity of Test Problems

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  10. Investigating the Effect of Complexity Factors in Stoichiometry Problems Using Logistic Regression and Eye Tracking

    ERIC Educational Resources Information Center

    Tang, Hui; Kirk, John; Pienta, Norbert J.

    2014-01-01

    This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…

  11. Continuing Education as a National Capital Investment.

    ERIC Educational Resources Information Center

    Striner, Herbert E.

    The constant readjustment that is necessary in a socially and economically complex society is discussed. The point is made that in recent years the United States has been confronted by an increasingly urgent series of economic problems. Intractably high levels of unemployment have accompanied abnormally high levels of inflation. It is also pointed…

  12. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  14. Multigrid Methods for Aerodynamic Problems in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1995-01-01

    Work has been directed at the development of efficient multigrid methods for the solution of aerodynamic problems involving complex geometries, including the development of computational methods for the solution of both inviscid and viscous transonic flow problems. The emphasis is on problems of complex, three-dimensional geometry. The methods developed are based upon finite-volume approximations to both the Euler and the Reynolds-Averaged Navier-Stokes equations. The methods are developed for use on multi-block grids using diagonalized implicit multigrid methods to achieve computational efficiency. The work is focused upon aerodynamic problems involving complex geometries, including advanced engine inlets.

  15. POWERING AIRPOWER: IS THE AIR FORCES ENERGY SECURE

    DTIC Science & Technology

    2016-02-01

    …needs. More on-site renewable energy generation increases AF readiness in times of crisis by minimizing the AF's dependency on fossil fuels. Financing...reducing the need for traditional fossil fuels, and the high investment cost of on-site renewable energy sources is still a serious roadblock in this...help installations better plan holistically. This research will take the form of a problem/solution framework. With any complex problem, rarely does a…

  16. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions

    DTIC Science & Technology

    2015-07-14

    Report AFRL-OSR-VA-TR-2015-0202, Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions (grant FA9550-12-1...). …functioning as they solve complex problems, and propose the means to improve the performance of teams under changing or adversarial conditions. By…

  17. Assessing problem-solving skills in construction education with the virtual construction simulator

    NASA Astrophysics Data System (ADS)

    Castronovo, Fadi

    The ability to solve complex problems is an essential skill that a construction and project manager must possess when entering the architectural, engineering, and construction industry. Such ability requires a mixture of problem-solving skills, ranging from lower to higher order thinking skills, composed of cognitive and metacognitive processes. These skills include the ability to develop and evaluate construction plans and to manage the execution of such plans. However, in a typical construction program, introducing students to such complex problems can be a challenge, and most commonly the learner is presented with only part of a complex problem. To address this challenge, the traditional methodology of delivering design, engineering, and construction instruction has been going through a technological revolution, driven by the rise of computer-based technology. In construction classrooms, as in other disciplines, simulations and educational games are being utilized to support the development of problem-solving skills. Previous engineering education research has illustrated the high potential that simulations and educational games have for engaging lower and higher order thinking skills, and their capacity to support the development of problem-solving skills. This research presents evidence supporting the theory that educational simulation games can help with the learning and retention of transferable problem-solving skills, which are necessary to solve complex construction problems. The educational simulation game employed in this study is the Virtual Construction Simulator (VCS), a game developed to provide students with an engaging learning activity that simulates the planning and managing phases of a construction project. Assessment of the third iteration of the game (VCS 3) has shown pedagogical value in promoting students' motivation and a basic understanding of construction concepts. To further evaluate the benefits for problem-solving skills, a new version (VCS 4) was developed, with new building modules and an assessment framework. The design and development of VCS 4 leveraged research in educational psychology, multimedia learning, human-computer interaction, and Building Information Modeling. In this dissertation the researcher aimed to evaluate the pedagogical value of VCS 4 in fostering problem-solving skills. To answer the research questions, a crossover repeated-measures quasi-experiment was designed to assess the educational gains that the VCS can provide to construction education. A group of 34 students attending a fourth-year construction course at a university in the United States was chosen to participate in the experiment. The three learning modules of the VCS were used, challenging the students to plan and manage the construction of a wooden pavilion, the steel erection of a dormitory, and the concrete placement of the same dormitory. Based on the results, the researcher was able to provide evidence supporting the hypothesis that the chosen sample of construction students gained and retained the problem-solving skills necessary to solve complex construction simulation problems, regardless of the sequence in which the modules were played. In conclusion, the presented results provide evidence supporting the theory that educational simulation games can help the learning and retention of transferable problem-solving skills, which are necessary to solve complex construction problems.

  18. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  19. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  20. Atwood's machine as a tool to introduce variable mass systems

    NASA Astrophysics Data System (ADS)

    de Sousa, Célia A.

    2012-03-01

    This article discusses an instructional strategy which explores possible similarities and/or analogies between familiar problems and more sophisticated systems. In this context, the Atwood's machine problem is used to introduce students to more complex problems involving ropes and chains. The methodology proposed helps students to develop the ability needed to apply relevant concepts in situations not previously encountered. The pedagogical advantages are relevant for both secondary and high school students, showing that, through adequate examples, the question of the validity of Newton's second law may be introduced even to introductory-level students.

  1. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as the frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real-data FAP instances are very difficult to solve due to the NP-hard nature of the problem; an efficient parallel approach that makes the most of different evolutionary strategies is therefore a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, the results show that the proposed approach obtains very high-quality solutions for the FAP and outperforms previously published results.
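
    As a rough illustration of the kind of search involved (not the authors' parallel hyper-heuristic), the sketch below evolves frequency assignments on a toy interference graph with a plain single-population genetic algorithm. The instance, the adjacent-channel conflict rule, and all parameters are invented for the example.

      import random

      def conflicts(assignment, edges):
          """Count interfering pairs assigned equal or adjacent channels."""
          return sum(1 for u, v in edges if abs(assignment[u] - assignment[v]) <= 1)

      def evolve(n_transceivers, edges, n_freqs, pop_size=60, generations=300):
          pop = [[random.randrange(n_freqs) for _ in range(n_transceivers)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda ind: conflicts(ind, edges))
              survivors = pop[:pop_size // 2]                # truncation selection
              children = []
              while len(survivors) + len(children) < pop_size:
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, n_transceivers)  # one-point crossover
                  child = a[:cut] + b[cut:]
                  i = random.randrange(n_transceivers)       # point mutation
                  child[i] = random.randrange(n_freqs)
                  children.append(child)
              pop = survivors + children
          best = min(pop, key=lambda ind: conflicts(ind, edges))
          return best, conflicts(best, edges)

      # Toy instance: 6 transceivers, an interference graph, 4 channels.
      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]
      best, cost = evolve(6, edges, n_freqs=4)
      print(best, cost)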

  2. Typification and taxonomic status re-evaluation of 15 taxon names within the species complex Cymbella affinis/tumidula/turgidula (Cymbellaceae, Bacillariophyta)

    PubMed Central

    da Silva, Weliton José; Jahn, Regine; Ludwig, Thelma Alvim Veiga; Hinz, Friedel; Menezes, Mariângela

    2015-01-01

    Abstract Specimens belonging to the Cymbella affinis / Cymbella tumidula / Cymbella turgidula species complex have many taxonomic problems, due to their high morphological variability and lack of type designations. Fifteen taxon names of this complex, distributed in five species, were re-evaluated concerning their taxonomic status, and lectotypified based on original material. In addition to light microscopy, some material was analyzed by electron microscopy. Four new combinations are proposed in order to reposition infraspecific taxa. PMID:26312038

  3. G.A.M.E.: GPU-accelerated mixture elucidator.

    PubMed

    Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J

    2017-09-15

    GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures in practical procedures.
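
    A minimal sketch of the pseudo-polynomial DP idea described above, under the simplifying assumption that elucidation reduces to counting sidechain mass combinations that hit a target mass; the masses below are made up, and the real G.A.M.E. pipeline is far more involved. The sketch also shows why each extra decimal digit retained multiplies the table size, and hence the runtime, by ten.

      # Coin-change style DP over masses scaled to integers by 10**digits.
      def count_decompositions(target_mass, sidechain_masses, digits=3):
          scale = 10 ** digits
          target = round(target_mass * scale)
          masses = [round(m * scale) for m in sidechain_masses]
          # ways[s] = number of sidechain multisets with total scaled mass s
          ways = [0] * (target + 1)
          ways[0] = 1
          for m in masses:                 # unbounded "coin change" recurrence
              for s in range(m, target + 1):
                  ways[s] += ways[s - m]
          return ways[target]

      # Hypothetical sidechain masses (Da) and a target residual mass:
      # 2 * 14.016 + 28.031 = 56.063 is the single valid decomposition.
      print(count_decompositions(56.063, [14.016, 15.011, 28.031], digits=3))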

  4. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  5. Structural qualia: a solution to the hard problem of consciousness.

    PubMed

    Loorits, Kristjan

    2014-01-01

    The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which even the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved.

  6. Structural qualia: a solution to the hard problem of consciousness

    PubMed Central

    Loorits, Kristjan

    2014-01-01

    The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which even the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved. PMID:24672510

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Thomas W.; Quach, Tu-Thach; Detry, Richard Joseph

    Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to pervasive interdependencies and attendant vulnerabilities to cascades in associated systems. Phoenix was initiated to address this high-impact problem space as engineers. Our overarching goals are maximizing security, maximizing health, and minimizing risk. We design interventions, or problem solutions, that influence CASoS to achieve specific aspirations. Through application to real-world problems, Phoenix is evolving the principles and discipline of CASoS Engineering while growing a community of practice and the CASoS engineers to populate it. Both grounded in reality and working to extend our understanding and control of that reality, Phoenix is at the same time a solution within a CASoS and a CASoS itself.

  8. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  9. Students' conceptual performance on synthesis physics problems with varying mathematical complexity

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-06-01

    A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems, sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N =179 ) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.

  10. Effect of rich-club on diffusion in complex networks

    NASA Astrophysics Data System (ADS)

    Berahmand, Kamal; Samadi, Negin; Sheikholeslami, Seyed Mahmood

    2018-05-01

    One of the main issues in complex networks is the phenomenon of diffusion, in which the goal is to find the nodes with the highest diffusing power. In diffusion there is always a trade-off between accuracy and time complexity; therefore, most recent studies have focused on finding new centralities to solve this problem, but our approach is different. Using one of the features of complex networks, namely the "rich-club", we analyze its effect on diffusion and demonstrate that in datasets with a pronounced rich-club it is better to use degree centrality for finding influential nodes, because it has linear time complexity and uses only local information; this rule does not apply, however, to datasets with a weak rich-club. Real and artificial datasets with a pronounced rich-club are then used to compare degree centrality against well-known centrality measures using the standard SIR model.
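
    A small sketch of the paper's heuristic using networkx, assuming a toy scale-free graph stands in for the datasets: check the rich-club coefficient, and if the rich-club is pronounced, rank candidate spreaders by plain degree centrality, the cheap local measure the authors recommend.

      import networkx as nx

      G = nx.barabasi_albert_graph(n=1000, m=4, seed=42)  # toy scale-free network

      # Unnormalized rich-club coefficient phi(k): density of links among
      # nodes with degree > k. (normalized=True would compare against a
      # degree-preserving rewired baseline, as is standard.)
      phi = nx.rich_club_coefficient(G, normalized=False)
      print({k: round(v, 3) for k, v in list(phi.items())[-5:]})

      # Degree centrality as the linear-time spreader ranking.
      top10 = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:10]
      print("candidate influential nodes:", [node for node, _ in top10])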

  11. [Adjustment of the German DRG system in 2009].

    PubMed

    Wenke, A; Franz, D; Pühse, G; Volkmer, B; Roeder, N

    2009-07-01

    The 2009 version of the German DRG system brought significant changes for urology concerning the coding of diagnoses, medical procedures and the DRG structure. In view of the political situation and considerable economic pressure, a critical analysis of the 2009 German DRG system is warranted. The relevant diagnoses, medical procedures and German DRGs in the 2008 and 2009 versions were analysed based on the publications of the German DRG Institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes for 2009 focus on the development of the DRG structure, DRG validation and codes for medical procedures to be used for very complex cases. The outcome of these changes for German hospitals may vary depending on their range of activities. The German DRG system again gained complexity, and high demands are made on the correct and complete coding of complex urology cases. The quality of case allocation in the German DRG system was improved. On the one hand, some of the old problems (e.g. enterostomata) still persist; on the other hand, new problems evolved out of the attempt to improve the case allocation of highly complex and expensive cases. Time will tell whether the increase in highly specialized DRGs with low case numbers will endure and whether annual fluctuations will reach acceptable rates.

  12. Aerodynamics of an airfoil with a jet issuing from its surface

    NASA Technical Reports Server (NTRS)

    Tavella, D. A.; Karamcheti, K.

    1982-01-01

    A simple, two dimensional, incompressible and inviscid model for the problem posed by a two dimensional wing with a jet issuing from its lower surface is considered and a parametric analysis is carried out to observe how the aerodynamic characteristics depend on the different parameters. The mathematical problem constitutes a boundary value problem where the position of part of the boundary is not known a priori. A nonlinear optimization approach was used to solve the problem, and the analysis reveals interesting characteristics that may help to better understand the physics involved in more complex situations in connection with high lift systems.

  13. Formative feedback and scaffolding for developing complex problem solving and modelling outcomes

    NASA Astrophysics Data System (ADS)

    Frank, Brian; Simper, Natalie; Kaupp, James

    2018-07-01

    This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.

  14. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert

    PubMed Central

    Schmidt, Henk G.; Rikers, Remy M. J. P.; Custers, Eugene J. F. M.; Splinter, Ted A. W.; van Saase, Jan L. C. M.

    2010-01-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between the cognitive processes of experts and novices, we hypothesized that experts in fact make better decisions after consciously thinking about complex problems, whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions: an immediate-decision condition and two delayed conditions, conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter for simple problems. In contrast, deliberation-without-attention improved novices’ decisions, but only for simple problems. Experts benefit from consciously thinking about complex problems; for novices, thinking does not help in those cases. PMID:20354726

  15. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert.

    PubMed

    Mamede, Sílvia; Schmidt, Henk G; Rikers, Remy M J P; Custers, Eugene J F M; Splinter, Ted A W; van Saase, Jan L C M

    2010-11-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between the cognitive processes of experts and novices, we hypothesized that experts in fact make better decisions after consciously thinking about complex problems, whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions: an immediate-decision condition and two delayed conditions, conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter for simple problems. In contrast, deliberation-without-attention improved novices' decisions, but only for simple problems. Experts benefit from consciously thinking about complex problems; for novices, thinking does not help in those cases.

  16. Predicting High School Completion Using Student Performance in High School Algebra: A Mixed Methods Research Study

    ERIC Educational Resources Information Center

    Chiado, Wendy S.

    2012-01-01

    Too many of our nation's youth have failed to complete high school. Determining why so many of our nation's students fail to graduate is a complex, multi-faceted problem and beyond the scope of any one study. The study presented herein utilized a thirteen-step mixed methods model developed by Leech and Onwuegbuzie (2007) to demonstrate within a…

  17. Physical Complexity and Cognitive Evolution

    NASA Astrophysics Data System (ADS)

    Jedlicka, Peter

    Our intuition tells us that there is a general trend in the evolution of nature, a trend towards greater complexity. However, there are several definitions of complexity, and hence it is difficult to argue for or against the validity of this intuition. Christoph Adami has recently introduced a novel measure called physical complexity that assigns low complexity to both ordered and random systems and high complexity to those in between. Physical complexity measures the amount of information that an organism stores in its genome about the environment in which it evolves. The theory of physical complexity predicts that evolution increases the amount of `knowledge' an organism accumulates about its niche. It might be fruitful to generalize Adami's concept of complexity to evolution as a whole (including the evolution of man). Physical complexity fits nicely into the philosophical framework of cognitive biology, which considers biological evolution as a progressing process of knowledge accumulation (a gradual increase of epistemic complexity). According to this paradigm, evolution is a cognitive `ratchet' that pushes organisms unidirectionally towards higher complexity. A dynamic environment continually creates problems to be solved; to survive in the environment means to solve the problem, and the solution is an embodied knowledge. Cognitive biology (as well as the theory of physical complexity) uses the concepts of information and entropy and views evolution from both the information-theoretical and thermodynamical perspectives. Concerning humans as conscious beings, it seems necessary to postulate the emergence of a new kind of knowledge: a self-aware and self-referential knowledge. The appearance of self-reflection in evolution indicates that the human brain has reached a new qualitative level of epistemic complexity.

  18. Preparing new nurses with complexity science and problem-based learning.

    PubMed

    Hodges, Helen F

    2011-01-01

    Successful nurses function effectively with adaptability, improvability, and interconnectedness, and can see emerging and unpredictable complex problems. Preparing new nurses for complexity requires a significant change in the prevalent but dated nursing education models for rising graduates. The science of complexity, coupled with problem-based learning and peer review, contributes a feasible framework for a constructivist learning environment in which students examine real-time systems data; explore uncertainty, inherent patterns, and ambiguity; and develop skills for unstructured problem solving. This article describes a pilot study of a problem-based learning strategy guided by principles of complexity science in a community clinical nursing course. Thirty-five senior nursing students participated during a 3-year period. Assessments included peer review, a final project paper, reflection, and a satisfaction survey. Results included higher-than-expected levels of student satisfaction, increased breadth and analysis of complex data, acknowledgment of communities as complex adaptive systems, and overall higher-level thinking skills than in previous years.

  19. Unsteady, one-dimensional gas dynamics computations using a TVD type sequential solver

    NASA Technical Reports Server (NTRS)

    Thakur, Siddharth; Shyy, Wei

    1992-01-01

    The efficacy of high-resolution convection schemes in resolving sharp gradients in unsteady, one-dimensional flows is examined using the TVD concept within a sequential solution algorithm. Two unsteady flow problems are considered: the interaction of the various waves in a shock tube with closed reflecting ends, and the unsteady gas dynamics in a tube with closed ends subject to an initial pressure perturbation. It is concluded that high-accuracy convection schemes in a sequential solution framework are capable of resolving discontinuities in unsteady flows involving complex gas dynamics. However, a sufficient amount of dissipation is required to suppress oscillations near discontinuities in the sequential approach, which leads to smearing of the solution profiles.
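
    A generic illustration of a TVD-type update, not the paper's sequential solver: a minmod-limited upwind step for linear advection of a square pulse. The minmod limiter supplies exactly the kind of dissipation near discontinuities mentioned above, keeping the solution free of new oscillations; the grid and CFL number are arbitrary choices for the sketch.

      import numpy as np

      def minmod(a, b):
          """Minmod limiter: zero at extrema, smallest slope otherwise."""
          return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def step(u, a, dx, dt):
          """One TVD update for u_t + a u_x = 0 (a > 0), periodic domain."""
          slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
          # Upwind (left) interface states with second-order reconstruction.
          u_face = u + 0.5 * (1 - a * dt / dx) * slope
          flux = a * u_face
          return u - dt / dx * (flux - np.roll(flux, 1))

      x = np.linspace(0, 1, 200, endpoint=False)
      u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)               # square pulse
      a, dx = 1.0, x[1] - x[0]
      dt = 0.4 * dx / a                                            # CFL = 0.4
      for _ in range(250):
          u = step(u, a, dx, dt)
      print("extrema remain within [0, 1]:", u.min(), u.max())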

  20. Designing Adaptive Low Dissipative High Order Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.; Parks, John W. (Technical Monitor)

    2002-01-01

    Proper control of the numerical dissipation/filter to accurately resolve all relevant multiscales of complex flow problems while still maintaining nonlinear stability and efficiency for long-time numerical integrations poses a great challenge to the design of numerical methods. The required type and amount of numerical dissipation/filter are not only physical problem dependent, but also vary from one flow region to another. This is particularly true for unsteady high-speed shock/shear/boundary-layer/turbulence/acoustics interactions and/or combustion problems since the dynamics of the nonlinear effect of these flows are not well-understood. Even with extensive grid refinement, it is of paramount importance to have proper control on the type and amount of numerical dissipation/filter in regions where it is needed.

  1. The problems and perspectives for the introduction of high-rise construction in Russian cities

    NASA Astrophysics Data System (ADS)

    Pershina, Anna; Radzhabov, Mehman; Dormidontova, Tatyana

    2018-03-01

    The purpose of this paper is to identify the principal areas of concern for high-rise construction in Russia. Examples of modern Russian and foreign high-rise construction are considered, and on their basis the most important problems for Russia, and their solutions, are identified. Each area of concern is considered separately, with particular attention to ecological problems and to the influence of high-rise construction on people's health and psychological well-being. The negative and positive influences of high-rise construction on the urban environment are examined for the cities of Moscow and Samara. Lack of experience, defects in regulatory documents that do not cover all the specifics of high-rise construction, systemic problems in the construction industry, and the frequent absence of proper control over compliance with existing regulations together complicate design, construction, and operation. At present, the pace of high-rise construction is increasing in Moscow, and the feasibility of high-rise buildings is being raised in other regions of Russia; the obstacles include high material costs, gaps in the regulatory framework, utility-network constraints, and maintenance problems. The research concludes with recommendations for the development of high-rise construction in Russia. The drivers of high-rise construction are urbanization and the need to concentrate the labor supply; the important organizational tasks are creating a compact urban environment, reducing the land area taken up by development, using innovative construction technologies, and ensuring proper maintenance. Resolving these tasks requires balancing the advantages of high-rise construction against construction costs and ecological impact.

  2. Effect of Culture on High-School Students' Question-Asking Ability Resulting from an Inquiry-Oriented Chemistry Laboratory

    ERIC Educational Resources Information Center

    Dkeidek, Iyad; Mamlok-Naaman, Rachel; Hofstein, Avi

    2011-01-01

    In order to cope with complex issues in the science-technology-environment-society context, one must develop students' high-order learning skills, such as question-asking ability (QAA), critical thinking, evaluative thinking, decision-making, and problem-solving capabilities within science education. In this study, we are concerned with evaluating…

  3. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  4. Addressing Complex Challenges through Adaptive Leadership: A Promising Approach to Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Nelson, Tenneisha; Squires, Vicki

    2017-01-01

    Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…

  5. Enabling Requirements-Based Programming for Highly-Dependable Complex Parallel and Distributed Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.

  6. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  7. The paradox of physicians and administrators in health care organizations.

    PubMed

    Peirce, J C

    2000-01-01

    Rapidly changing times in health care challenge both physicians and health care administrators to manage the paradox of providing orderly, high quality, and efficient care while bringing forth innovations to address present unmet problems and surprises that emerge. Health care has grown throughout the past several centuries through differentiation and integration, becoming a highly complex biological system with the hospital as the central attractive force--or "strange attractor"--during this century. The theoretical model of complex adaptive systems promises more effective strategic direction in addressing these chaotic times where the new strange attractor moves beyond the hospital.

  8. Reaching out to take on TB in Somalia.

    PubMed

    Moore, David A J; Granat, Simo M

    2014-01-01

    Among the many challenges facing populations disrupted by complex emergencies, personal security and food security rank much higher than access to healthcare. However, over time health needs assume increasing importance. Many complex crises occur in settings where the background incidence of TB is already high; social and economic conditions in crises are then highly conducive to amplification of the existing TB problem. Innovative approaches to delivery of diagnostic and treatment services, transition planning and integration with other healthcare providers and services are vital. In the extremely challenging environment of Somalia, multiple partners are making headway though collaboration and innovation.

  9. Analysis of Complexity Evolution Management and Human Performance Issues in Commercial Aircraft Automation Systems

    NASA Technical Reports Server (NTRS)

    Vakil, Sanjay S.; Hansman, R. John

    2000-01-01

    Autoflight systems in the current generation of aircraft have been implicated in several recent incidents and accidents. A contributory aspect to these incidents may be the manner in which aircraft transition between differing behaviours or 'modes.' The current state of aircraft automation was investigated and the incremental development of the autoflight system was tracked through a set of aircraft to gain insight into how these systems developed. This process appears to have resulted in a system without a consistent global representation. In order to evaluate and examine autoflight systems, a 'Hybrid Automation Representation' (HAR) was developed. This representation was used to examine several specific problems known to exist in aircraft systems. Cyclomatic complexity is an analysis tool from computer science which counts the number of linearly independent paths through a program graph. This approach was extended to examine autoflight mode transitions modelled with the HAR. A survey was conducted of pilots to identify those autoflight mode transitions which airline pilots find difficult. The transitions identified in this survey were analyzed using cyclomatic complexity to gain insight into the apparent complexity of the autoflight system from the perspective of the pilot. Mode transitions which had been identified as complex by pilots were found to have a high cyclomatic complexity. Further examination was made into a set of specific problems identified in aircraft: the lack of a consistent representation of automation, concern regarding appropriate feedback from the automation, and the implications of physical limitations on the autoflight systems. Mode transitions involved in changing to and leveling at a new altitude were identified across multiple aircraft by numerous pilots. Where possible, evaluation and verification of the behaviour of these autoflight mode transitions was investigated via aircraft-specific high fidelity simulators. Three solution approaches to concerns regarding autoflight systems, and mode transitions in particular, are presented in this thesis. The first is to use training to modify pilot behaviours, or procedures to work around known problems. The second approach is to mitigate problems by enhancing feedback. The third approach is to modify the process by which automation is designed. The Operator Directed Process forces the consideration and creation of an automation model early in the design process for use as the basis of the software specification and training.
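
    The complexity count used in the thesis is the standard cyclomatic number M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the graph. A minimal sketch on a made-up mode-transition graph follows; the mode names are illustrative only, not taken from the study.

      def cyclomatic_complexity(nodes, edges, components=1):
          """M = E - N + 2P, the number of linearly independent paths."""
          return len(edges) - len(nodes) + 2 * components

      # Hypothetical transition graph for capturing and holding a new altitude.
      nodes = ["V/S", "FLCH", "ALT CAP", "ALT HOLD"]
      edges = [("V/S", "ALT CAP"), ("FLCH", "ALT CAP"), ("ALT CAP", "ALT HOLD"),
               ("ALT HOLD", "V/S"), ("ALT HOLD", "FLCH"), ("V/S", "FLCH")]
      print(cyclomatic_complexity(nodes, edges))  # 6 - 4 + 2 = 4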

  10. A problem-solving approach to effective insulin injection for patients at either end of the body mass index.

    PubMed

    Juip, Micki; Fitzner, Karen

    2012-06-01

    People with diabetes require skills and knowledge to adhere to medication regimens and self-manage this complex disease. Effective self-management is contingent upon effective problem solving and decision making. Gaps existed regarding useful approaches to problem solving by individuals with very low and very high body mass index (BMI) who self-administer insulin injections. This article addresses those gaps by presenting findings from a patient survey, a symposium on the topic of problem solving, and recent interviews with diabetes educators to facilitate problem-solving approaches for people with diabetes with high and low BMI who inject insulin and/or other medications. In practice, problem solving involves problem identification, definition, and specification; goal and barrier identification are a prelude to generating a set of potential strategies for problem resolution and applying these strategies to implement a solution. Teaching techniques, such as site rotation and ensuring that people with diabetes use the appropriate equipment, increase confidence with medication adherence. Medication taking is more effective when people with diabetes are equipped with the knowledge, skills, and problem-solving behaviors to effectively self-manage their injections.

  11. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  12. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  13. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  14. The Bright Side of Being Blue: Depression as an Adaptation for Analyzing Complex Problems

    ERIC Educational Resources Information Center

    Andrews, Paul W.; Thomson, J. Anderson, Jr.

    2009-01-01

    Depression is the primary emotional condition for which help is sought. Depressed people often report persistent rumination, which involves analysis, and complex social problems in their lives. Analysis is often a useful approach for solving complex problems, but it requires slow, sustained processing, so disruption would interfere with problem…

  15. Sequencing the Connectome

    PubMed Central

    Zador, Anthony M.; Dubnau, Joshua; Oyibo, Hassana K.; Zhan, Huiqing; Cao, Gang; Peikon, Ian D.

    2012-01-01

    Connectivity determines the function of neural circuits. Historically, circuit mapping has usually been viewed as a problem of microscopy, but no current method can achieve high-throughput mapping of entire circuits with single neuron precision. Here we describe a novel approach to determining connectivity. We propose BOINC (“barcoding of individual neuronal connections”), a method for converting the problem of connectivity into a form that can be read out by high-throughput DNA sequencing. The appeal of using sequencing is that its scale—sequencing billions of nucleotides per day is now routine—is a natural match to the complexity of neural circuits. An inexpensive high-throughput technique for establishing circuit connectivity at single neuron resolution could transform neuroscience research. PMID:23109909

  16. Implicit Geometry Meshing for the simulation of Rotary Friction Welding

    NASA Astrophysics Data System (ADS)

    Schmicker, D.; Persson, P.-O.; Strackeljan, J.

    2014-08-01

    The simulation of Rotary Friction Welding (RFW) is a challenging task, since it poses a coupled problem involving phenomena such as large plastic deformations, heat flux, contact and friction. In particular, mesh generation and its restoration when using a Lagrangian description of motion are especially demanding. In this regard, Implicit Geometry Meshing (IGM) algorithms are promising alternatives to the more conventional explicit methods. Because of the implicit description of the geometry during remeshing, the IGM procedure turns out to be highly robust and generates spatial discretizations of high quality regardless of the complexity of the flash shape and its inclusions. A model for efficient RFW simulation is presented, which is based on a Carreau fluid law, an Augmented Lagrange approach for mapping the incompressible deformations, a penalty contact approach, a fully regularized Coulomb-/fluid friction law and a hybrid time integration strategy. The implementation of the IGM algorithm using 6-node triangular finite elements is described in detail. The techniques are demonstrated on a fairly complex friction welding problem, illustrating the performance and the potential of the proposed method. The techniques are general and straightforward to implement, and offer the potential of successful adoption to a wide range of other engineering problems.
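
    The Carreau law mentioned above has a standard closed form: the effective viscosity shear-thins between a zero-shear plateau eta_0 and an infinite-shear plateau eta_inf. A small sketch follows, with placeholder parameter values rather than the paper's calibrated ones.

      import numpy as np

      def carreau_viscosity(gamma_dot, eta0=1e4, eta_inf=1.0, lam=10.0, n=0.3):
          """eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*gamma_dot)^2)^((n-1)/2)."""
          return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

      shear_rates = np.logspace(-3, 3, 7)      # 1/s
      print(carreau_viscosity(shear_rates))    # viscosity falls as shear rate rises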

  17. Challenges in Biomarker Discovery: Combining Expert Insights with Statistical Analysis of Complex Omics Data

    PubMed Central

    McDermott, Jason E.; Wang, Jing; Mitchell, Hugh; Webb-Robertson, Bobbie-Jo; Hafen, Ryan; Ramey, John; Rodland, Karin D.

    2012-01-01

    Introduction: The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful molecular signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities for more sophisticated approaches to integrating purely statistical and expert knowledge-based approaches. Areas covered: In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered in deriving valid and useful signatures of disease. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion: Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to identify predictive signatures of disease are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem including metrics for the evaluation of biomarkers. PMID:23335946

  18. Challenges in Biomarker Discovery: Combining Expert Insights with Statistical Analysis of Complex Omics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Wang, Jing; Mitchell, Hugh D.

    2013-01-01

    The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities both for purely statistical and expert knowledge-based approaches and would benefit from improved integration of the two. Areas covered: In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion: Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to biomarker discovery and characterization are key to future success in the biomarker field. We will describe our recommendations of possible approaches to this problem including metrics for the evaluation of biomarkers.

  19. Analysis of complex-type chromosome exchanges in astronauts' lymphocytes after space flight as a biomarker of high-LET exposure

    NASA Technical Reports Server (NTRS)

    George, Kerry; Wu, Honglu; Willingham, Veronica; Cucinotta, Francis A.

    2002-01-01

    High-LET radiation is more efficient in producing complex-type chromosome exchanges than sparsely ionizing radiation, and this can potentially be used as a biomarker of radiation quality. To investigate if complex chromosome exchanges are induced by the high-LET component of space radiation exposure, damage was assessed in astronauts' blood lymphocytes before and after long duration missions of 3-4 months. The frequency of simple translocations increased significantly for most of the crewmembers studied. However, there were few complex exchanges detected and only one crewmember had a significant increase after flight. It has been suggested that the yield of complex chromosome damage could be underestimated when analyzing metaphase cells collected at one time point after irradiation, and analysis of chemically-induced PCC may be more accurate since problems with complicated cell-cycle delays are avoided. However, in this case the yields of chromosome damage were similar for metaphase and PCC analysis of astronauts' lymphocytes. It appears that the use of complex-type exchanges as biomarker of radiation quality in vivo after low-dose chronic exposure in mixed radiation fields is hampered by statistical uncertainties.

  20. High-Order Methods for Incompressible Fluid Flow

    NASA Astrophysics Data System (ADS)

    Deville, M. O.; Fischer, P. F.; Mund, E. H.

    2002-08-01

    High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to the enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.

  1. Lee-Yang zero analysis for the study of QCD phase structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejiri, Shinji

    2006-03-01

    We comment on the Lee-Yang zero analysis for the study of the phase structure of QCD at high temperature and baryon number density by Monte-Carlo simulations. We find that the sign problem for nonzero density QCD induces a serious problem in the finite volume scaling analysis of the Lee-Yang zeros for the investigation of the order of the phase transition. If the sign problem occurs at large volume, the Lee-Yang zeros will always approach the real axis of the complex parameter plane in the thermodynamic limit. This implies that a scaling behavior which would suggest a crossover transition will not be obtained. To clarify this problem, we discuss the Lee-Yang zero analysis for SU(3) pure gauge theory as a simple example without the sign problem, and then consider the case of nonzero density QCD. It is suggested that the distribution of the Lee-Yang zeros in the complex parameter space obtained by each simulation could be more important information for the investigation of the critical endpoint in the (T, μ_q) plane than the finite volume scaling behavior.
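
    A toy sketch of the zero analysis itself, with synthetic coefficients rather than QCD data: write a finite-volume partition function as a polynomial in the fugacity z and track how its complex zeros approach the real axis as the volume grows, which is the finite-volume scaling signal discussed above. Here Z(z) = z^V + 1 places the zeros on the unit circle at angles (2k+1)*pi/V, so the nearest zero edges toward the positive real axis like pi/V.

      import numpy as np

      def lee_yang_zeros(coeffs):
          """Zeros of Z(z) = sum_n c_n z^n; coeffs ordered c_0..c_N."""
          return np.roots(coeffs[::-1])   # np.roots expects highest power first

      for volume in (4, 8, 16, 32):
          coeffs = [1.0] + [0.0] * (volume - 1) + [1.0]   # synthetic Z(z) = z^V + 1
          zeros = lee_yang_zeros(coeffs)
          closest = zeros[np.argmin(np.abs(np.angle(zeros)))]
          print(volume, "zero nearest the positive real axis:", closest)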

  2. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
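
    A compact sketch of the screening step, using a stand-in toy objective rather than a reservoir model: estimate first-order and total-order Sobol indices with standard Saltelli-style sampling (Jansen estimator for the total index), then treat variables with negligible total index as candidates for freezing in the reduced problem.

      import numpy as np

      def objective(x):                  # toy stand-in: x2 barely matters
          return x[:, 0] ** 2 + 0.8 * x[:, 0] * x[:, 1] + 0.01 * x[:, 2]

      rng = np.random.default_rng(0)
      n, d = 4096, 3
      A = rng.random((n, d))             # two independent sample matrices
      B = rng.random((n, d))
      fA, fB = objective(A), objective(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]            # resample only variable i
          fABi = objective(ABi)
          S1 = np.mean(fB * (fABi - fA)) / var          # first-order index
          ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order (Jansen)
          print(f"x{i}: S1={S1:.3f}  ST={ST:.3f}")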

  3. Understanding Wicked Problems: A Key to Advancing Environmental Health Promotion

    ERIC Educational Resources Information Center

    Kreuter, Marshall W.; De Rosa, Christopher; Howze, Elizabeth H.; Baldwin, Grant T.

    2004-01-01

    Complex environmental health problems--like air and water pollution, hazardous waste sites, and lead poisoning--are in reality a constellation of linked problems embedded in the fabric of the communities in which they occur. These kinds of complex problems have been characterized by some as "wicked problems" wherein stakeholders may have…

  4. Evolutionary fuzzy modeling human diagnostic decisions.

    PubMed

    Peña-Reyes, Carlos Andrés

    2004-05-01

    Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process, while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling--meaning the construction of fuzzy systems--is an arduous task, demanding the identification of many parameters. To solve this, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems of both high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a Web-based tool, called COBRA, for aiding radiologists in mammography interpretation.

  5. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  6. The Efficacy and Development of Students' Problem-Solving Strategies During Compulsory Schooling: Logfile Analyses

    PubMed Central

    Molnár, Gyöngyvér; Csapó, Benő

    2018-01-01

    The purpose of this study was to examine the role of exploration strategies students used in the first phase of problem solving. The sample for the study was drawn from 3rd- to 12th-grade students (aged 9–18) in Hungarian schools (n = 4,371). Problems designed in the MicroDYN approach with different levels of complexity were administered to the students via the eDia online platform. Logfile analyses were performed to ascertain the impact of strategy use on the efficacy of problem solving. Students' exploration behavior was coded and clustered through Latent Class Analyses. Several theoretically effective strategies were identified, including the vary-one-thing-at-a-time (VOTAT) strategy and its sub-strategies. The results of the analyses indicate that the use of a theoretically effective strategy, which extracts all information required to solve the problem, did not always lead to high performance. Conscious VOTAT strategy users proved to be the best problem solvers followed by non-conscious VOTAT strategy users and non-VOTAT strategy users. In the primary school sub-sample, six qualitatively different strategy class profiles were distinguished. The results shed new light on and provide a new interpretation of previous analyses of the processes involved in complex problem solving. They also highlight the importance of explicit enhancement of problem-solving skills and problem-solving strategies as a tool for knowledge acquisition in new contexts during and beyond school lessons. PMID:29593606

  7. The Efficacy and Development of Students' Problem-Solving Strategies During Compulsory Schooling: Logfile Analyses.

    PubMed

    Molnár, Gyöngyvér; Csapó, Benő

    2018-01-01

    The purpose of this study was to examine the role of exploration strategies students used in the first phase of problem solving. The sample for the study was drawn from 3rd- to 12th-grade students (aged 9-18) in Hungarian schools (n = 4,371). Problems designed in the MicroDYN approach with different levels of complexity were administered to the students via the eDia online platform. Logfile analyses were performed to ascertain the impact of strategy use on the efficacy of problem solving. Students' exploration behavior was coded and clustered through Latent Class Analyses. Several theoretically effective strategies were identified, including the vary-one-thing-at-a-time (VOTAT) strategy and its sub-strategies. The results of the analyses indicate that the use of a theoretically effective strategy, which extracts all information required to solve the problem, did not always lead to high performance. Conscious VOTAT strategy users proved to be the best problem solvers followed by non-conscious VOTAT strategy users and non-VOTAT strategy users. In the primary school sub-sample, six qualitatively different strategy class profiles were distinguished. The results shed new light on and provide a new interpretation of previous analyses of the processes involved in complex problem solving. They also highlight the importance of explicit enhancement of problem-solving skills and problem-solving strategies as a tool for knowledge acquisition in new contexts during and beyond school lessons.

  8. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  9. Entropy-based consensus clustering for patient stratification.

    PubMed

    Liu, Hongfu; Zhao, Rui; Fang, Hongsheng; Cheng, Feixiong; Fu, Yun; Liu, Yang-Yu

    2017-09-01

    Patient stratification or disease subtyping is crucial for precision medicine and personalized treatment of complex diseases. The increasing availability of high-throughput molecular data provides a great opportunity for patient stratification. Many clustering methods have been employed to tackle this problem in a purely data-driven manner. Yet, existing methods leveraging high-throughput molecular data often suffer from various limitations, e.g. noise, data heterogeneity, high dimensionality or poor interpretability. Here we introduce an Entropy-based Consensus Clustering (ECC) method that overcomes those limitations all at once. Our ECC method employs an entropy-based utility function to fuse many basic partitions into a consensus one that agrees with the basic ones as much as possible. Maximizing the utility function in ECC has a much more meaningful interpretation than in any other consensus clustering method. Moreover, we exactly map the complex utility maximization problem to the classic K-means clustering problem, which can then be efficiently solved with linear time and space complexity. Our ECC method can also naturally integrate multiple molecular data types measured from the same set of subjects, and easily handle missing values without any imputation. We applied ECC to 110 synthetic and 48 real datasets, including 35 cancer gene expression benchmark datasets and 13 cancer types with four molecular data types from The Cancer Genome Atlas. We found that ECC shows superior performance against existing clustering methods. Our results clearly demonstrate the power of ECC in clinically relevant patient stratification. The Matlab package is available at http://scholar.harvard.edu/yyl/ecc. Contact: yunfu@ece.neu.edu or yyl@channing.harvard.edu. Supplementary data are available at Bioinformatics online.
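
    As a rough illustration of the key reduction, namely solving consensus clustering with plain K-means, the sketch below fuses synthetic basic partitions via a binary encoding, assuming scikit-learn; it mimics the general idea rather than ECC's exact entropy-based utility.

    ```python
    # Consensus clustering via K-means on one-hot-encoded basic partitions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import OneHotEncoder

    rng = np.random.default_rng(0)
    n_subjects, n_partitions = 200, 30

    # Basic partitions: each column assigns every subject to one of 3 clusters.
    basic = rng.integers(0, 3, size=(n_subjects, n_partitions))

    # One-hot encode each basic partition, concatenate column-wise, then run
    # plain K-means on the binary matrix to obtain the consensus partition.
    B = OneHotEncoder(sparse_output=False).fit_transform(basic)
    consensus = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(B)
    print(consensus[:10])
    ```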

  10. Supply network configuration—A benchmarking problem

    NASA Astrophysics Data System (ADS)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  11. Dynamic pathway modeling of signal transduction networks: a domain-oriented approach.

    PubMed

    Conzelmann, Holger; Gilles, Ernst-Dieter

    2008-01-01

    Mathematical models of biological processes are becoming increasingly important in biology. The aim is a holistic understanding of how processes such as cellular communication, cell division, regulation, homeostasis, or adaptation work, how they are regulated, and how they react to perturbations. The great complexity of most of these processes necessitates the generation of mathematical models in order to address these questions. In this chapter we provide an introduction to basic principles of dynamic modeling and highlight both the problems and the opportunities of dynamic modeling in biology. The main focus will be on modeling of signal transduction pathways, which requires the application of a special modeling approach. A common pattern, especially in eukaryotic signaling systems, is the formation of multiprotein signaling complexes. Even for a small number of interacting proteins, the number of distinguishable molecular species can be extremely high. This combinatorial complexity is due to the great number of distinct binding domains of many receptors and scaffold proteins involved in signal transduction. However, these problems can be overcome using a new domain-oriented modeling approach, which makes it possible to handle complex and branched signaling pathways.
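
    A toy calculation (not the chapter's formalism) makes the combinatorial point concrete: a scaffold with n independent binding domains, each in one of three states, already yields an unmanageable number of distinguishable species, whereas per-domain bookkeeping grows only linearly.

    ```python
    # Combinatorial species count vs domain-oriented state count (toy numbers):
    # n domains, each empty/occupied/phosphorylated, give 3**n full species
    # but only 3*n per-domain state variables.
    for n in (2, 5, 10, 15):
        print(f"{n:2d} domains: {3**n:>10,} species vs {3 * n:3d} domain states")
    ```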

  12. Complex trauma and mental health in children and adolescents placed in foster care: findings from the National Child Traumatic Stress Network.

    PubMed

    Greeson, Johanna K P; Briggs, Ernestine C; Kisiel, Cassandra L; Layne, Christopher M; Ake, George S; Ko, Susan J; Gerrity, Ellen T; Steinberg, Alan M; Howard, Michael L; Pynoos, Robert S; Fairbank, John A

    2011-01-01

    Many children in the child welfare system (CWS) have histories of recurrent interpersonal trauma perpetrated by caregivers early in life often referred to as complex trauma. Children in the CWS also experience a diverse range of reactions across multiple areas of functioning that are associated with such exposure. Nevertheless, few CWSs routinely screen for trauma exposure and associated symptoms beyond an initial assessment of the precipitating event. This study examines trauma histories, including complex trauma exposure (physical abuse, sexual abuse, emotional abuse, neglect, domestic violence), posttraumatic stress, and behavioral and emotional problems of 2,251 youth (age 0 to 21; M = 9.5, SD = 4.3) in foster care who were referred to a National Child Traumatic Stress Network site for treatment. High prevalence rates of complex trauma exposure were observed: 70.4% of the sample reported at least two of the traumas that constitute complex trauma; 11.7% of the sample reported all 5 types. Compared to youth with other types of trauma, those with complex trauma histories had significantly higher rates of internalizing problems, posttraumatic stress, and clinical diagnoses, and differed on some demographic variables. Implications for child welfare practice and future research are discussed.

  13. Best Practices In Overset Grid Generation

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Gomez, Reynaldo J., III; Rogers, Stuart E.; Buning, Pieter G.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Grid generation for overset grids on complex geometry can be divided into four main steps: geometry processing, surface grid generation, volume grid generation and domain connectivity. For each of these steps, the procedures currently practiced by experienced users are described. Typical problems encountered are also highlighted and discussed. Most of the guidelines are derived from experience on a variety of problems including space launch and return vehicles, subsonic transports with propulsion and high lift devices, supersonic vehicles, rotorcraft vehicles, and turbomachinery.

  14. Aerodynamic instability: A case history

    NASA Technical Reports Server (NTRS)

    Eisenmann, R. C.

    1985-01-01

    The identification, diagnosis, and final correction of complex machinery malfunctions typically require the correlation of many parameters such as mechanical construction, process influence, maintenance history, and vibration response characteristics. The progression is reviewed of field testing, diagnosis, and final correction of a specific machinery instability problem. The case history presented addresses a unique low frequency instability problem on a high pressure barrel compressor. The malfunction was eventually diagnosed as a fluidic mechanism that manifested as an aerodynamic disturbance to the rotor assembly.

  15. An improved least cost routing approach for WDM optical network without wavelength converters

    NASA Astrophysics Data System (ADS)

    Bonani, Luiz H.; Forghani-elahabad, Majid

    2016-12-01

    The routing and wavelength assignment (RWA) problem has long attracted attention in optical networks, and several algorithms have consequently been proposed in the literature to solve it. The best-known techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP); the last involves high computational complexity and requires extensive support from the control and management protocols. The second offers a trade-off between performance and complexity, and hence we take it as the basis for improvement in our work. Considering the RWA problem in a wavelength-routed optical network with no wavelength converters, an improved technique is proposed for the routing subproblem in order to decrease the BP of the network. Based on the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others for different numbers of available wavelengths.
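
    A minimal sketch of fixed-alternate routing with wavelength continuity and a most-used assignment policy is given below, assuming the networkx package; the toy topology and bookkeeping are illustrative, and the paper's link/path cost reordering is not reproduced.

    ```python
    # Fixed-alternate routing with wavelength continuity (no converters).
    import itertools
    import networkx as nx

    W = 4                      # wavelengths per link
    G = nx.cycle_graph(6)      # toy topology
    free = {tuple(sorted(e)): set(range(W)) for e in G.edges}
    usage = [0] * W            # global counters for the most-used policy

    def route(src, dst, k=3):
        paths = itertools.islice(nx.shortest_simple_paths(G, src, dst), k)
        for path in paths:
            links = [tuple(sorted(e)) for e in zip(path, path[1:])]
            # Wavelength continuity: the same wavelength must be free on
            # every link along the candidate path.
            avail = set.intersection(*(free[l] for l in links))
            if avail:
                w = max(avail, key=lambda i: usage[i])  # most-used first
                for l in links:
                    free[l].discard(w)
                usage[w] += 1
                return path, w
        return None  # request blocked

    print(route(0, 3))
    ```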

  16. An Efficient Semi-supervised Learning Approach to Predict SH2 Domain Mediated Interactions.

    PubMed

    Kundu, Kousik; Backofen, Rolf

    2017-01-01

    The Src homology 2 (SH2) domain is an important subclass of modular protein domains that plays an indispensable role in several biological processes in eukaryotes. SH2 domains specifically bind to the phosphotyrosine residue of their binding peptides to facilitate various molecular functions. For determining the subtle binding specificities of SH2 domains, it is very important to understand the intriguing mechanisms by which these domains recognize their target peptides in a complex cellular environment. Several attempts have been made to predict SH2-peptide interactions using high-throughput data. However, these high-throughput data are often affected by a low signal-to-noise ratio. Furthermore, the prediction methods have several additional shortcomings, such as the linearity problem and high computational complexity. Thus, computational identification of SH2-peptide interactions using high-throughput data remains challenging. Here, we propose a machine learning approach based on an efficient semi-supervised learning technique for the prediction of 51 SH2 domain mediated interactions in the human proteome. In our study, we have successfully employed several strategies to tackle the major problems in computational identification of SH2-peptide interactions.
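
    The sketch below shows the general shape of such a semi-supervised setup, assuming scikit-learn; the synthetic features stand in for SH2/peptide descriptors, and this is not the authors' specific algorithm.

    ```python
    # Generic semi-supervised sketch: only a few labels, the rest inferred.
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))           # stand-in feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden "interaction" label
    y_train = y.copy()
    y_train[50:] = -1                         # only 50 labeled examples

    model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_train)
    # transduction_ holds the labels propagated to the unlabeled points.
    print("accuracy on unlabeled:", (model.transduction_[50:] == y[50:]).mean())
    ```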

  17. Preparing for Complexity and Wicked Problems through Transformational Learning Approaches

    ERIC Educational Resources Information Center

    Yukawa, Joyce

    2015-01-01

    As the information environment becomes increasingly complex and challenging, Library and Information Studies (LIS) education is called upon to nurture innovative leaders capable of managing complex situations and "wicked problems." While disciplinary expertise remains essential, higher levels of mental complexity and adaptive…

  18. Complexity in Nature and Society: Complexity Management in the Age of Globalization

    NASA Astrophysics Data System (ADS)

    Mainzer, Klaus

    The theory of nonlinear complex systems has become a proven problem-solving approach in the natural sciences, from cosmic and quantum systems to cellular organisms and the brain. Even in modern engineering science, self-organizing systems are developed to manage complex networks and processes. It is now recognized that many of our ecological, social, economic, and political problems are also of a global, complex, and nonlinear nature. What are the laws of sociodynamics? Is there a socio-engineering of nonlinear problem solving? What can we learn from nonlinear dynamics for complexity management in social, economic, financial and political systems? Is self-organization an acceptable strategy to handle the challenges of complexity in firms, institutions and other organizations? It is a main thesis of the talk that nature and society are basically governed by nonlinear and complex information dynamics. How computational is sociodynamics? What can we hope for in social, economic and political problem solving in the age of globalization?

  19. Characterization of Anaerobic Chemical Processes in Reservoirs: Problem Description and Conceptual Model Formulation.

    DTIC Science & Technology

    1981-04-01

    also found that almost all the Fe in soil solution was complexed with organic matter. The high degree of Fe complexing in soil solution was...range of pH, the potentials were in conformity with the theoretical slope of 0.06. 45. When a soil is submerged, soil solution concentrations of...Ponnanperuma 1972). Low temperatures lead to extensive accumulation of organic acids in the soil solution (International Rice Research Institute (IRRI) 1969

  20. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.
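
    For orientation, the sketch below evaluates the standard JWL products equation of state that JWLB generalizes (JWLB adds further exponential terms); the TNT-like parameter values are commonly tabulated ones, quoted here purely for illustration, not the calibration discussed above.

    ```python
    # Standard JWL products EOS; V is the relative volume v/v0 and E the
    # energy per unit initial volume, both pressure terms in GPa.
    import math

    def jwl_pressure(V, E, A=371.2, B=3.231, R1=4.15, R2=0.95, omega=0.30):
        """p(V, E) in GPa for commonly tabulated TNT-like parameters."""
        return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
                + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
                + omega * E / V)

    print(jwl_pressure(V=1.0, E=7.0))  # products pressure at V = 1 (illustrative)
    ```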

  1. Planning and Scheduling for Fleets of Earth Observing Satellites

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.

  2. Adaptivity and smart algorithms for fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1990-01-01

    This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of algorithms designed to function on the grid. The application of these new techniques to several problem classes are addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.

  3. Environmental management problems in India

    NASA Astrophysics Data System (ADS)

    Bowonder, B.

    1986-09-01

    Environmental problems are becoming serious in India because of the interacting effects of increasing population density, industrialization and urbanization, and poor environmental management practices. Unless stringent regulatory measures are taken, environmental systems will be irreversibly degraded. Lack of political commitment, lack of a comprehensive environmental policy, poor environmental awareness, functional fragmentation of the public administration system, poor mass media concern, and prevalence of poverty are some of the major factors responsible for increasing the severity of the problems. Environmental problems in India are highly complex, and management procedures have to be developed to achieve coordination between various functional departments, and for this, political leaders have to be convinced of the need to initiate environmental protection measures.

  4. High-frequency modes in a two-dimensional rectangular room with windows

    NASA Astrophysics Data System (ADS)

    Shabalina, E. D.; Shirgina, N. V.; Shanin, A. V.

    2010-07-01

    We examine a two-dimensional model problem of architectural acoustics on sound propagation in a rectangular room with windows. It is supposed that the walls are ideally flat and hard; the windows absorb all energy that falls upon them. We search for the modes of such a room having minimal attenuation indices, which have a pronounced billiard-trajectory structure. The main attenuation mechanism for such modes is diffraction at the edges of the windows. We construct estimates for the attenuation indices of these modes based on the solution of the Weinstein problem. We also formulate diffraction problems, similar in statement to the Weinstein problem, that describe the attenuation of billiard modes in more complex situations.

  5. Antecedents and behavior-problem outcomes of parental monitoring and psychological control in early adolescence.

    PubMed

    Pettit, G S; Laird, R D; Dodge, K A; Bates, J E; Criss, M M

    2001-01-01

    The early childhood antecedents and behavior-problem correlates of monitoring and psychological control were examined in this prospective, longitudinal, multi-informant study. Parenting data were collected during home visit interviews with 440 mothers and their 13-year-old children. Behavior problems (anxiety/depression and delinquent behavior) were assessed via mother, teacher, and/or adolescent reports at ages 8 through 10 years and again at ages 13 through 14. Home-interview data collected at age 5 years were used to measure antecedent parenting (harsh/reactive, positive/proactive), family background (e.g., socioeconomic status), and mother-rated child behavior problems. Consistent with expectation, monitoring was anteceded by a proactive parenting style and by advantageous family-ecological characteristics, and psychological control was anteceded by harsh parenting and by mothers' earlier reports of child externalizing problems. Consistent with prior research, monitoring was associated with fewer delinquent behavior problems. Links between psychological control and adjustment were more complex: High levels of psychological control were associated with more delinquent problems for girls and for teens who were low in preadolescent delinquent problems, and with more anxiety/depression for girls and for teens who were high in preadolescent anxiety/depression.

  6. Self-Directed Cooperative Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo; Morris, Robert (Technical Monitor)

    2003-01-01

    The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high-level of uncertainty in this domain.
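
    Since the compiler maps mission descriptions to a Markov decision problem, a generic value-iteration sketch (below, using NumPy) shows the kind of computation the rover controller ultimately performs; the states, actions, and numbers are illustrative, not the rover model itself.

    ```python
    # Generic value iteration for a small Markov decision problem.
    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.95
    rng = np.random.default_rng(3)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
    R = rng.uniform(0, 1, size=(n_states, n_actions))                 # reward R[s, a]

    V = np.zeros(n_states)
    for _ in range(500):
        Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * sum_s' P * V
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    policy = Q.argmax(axis=1)          # greedy action per state
    print(V.round(3), policy)
    ```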

  7. A matrix-assisted laser desorption/ionization mass spectroscopy method for the analysis of small molecules by integrating chemical labeling with the supramolecular chemistry of cucurbituril.

    PubMed

    Ding, Jun; Xiao, Hua-Ming; Liu, Simin; Wang, Chang; Liu, Xin; Feng, Yu-Qi

    2018-10-05

    Although several methods have realized the analysis of low molecular weight (LMW) compounds using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) by overcoming the problem of interference with MS signals in the low mass region derived from conventional organic matrices, this emerging field still requires strategies to address the issue of analyzing complex samples containing LMW components in addition to the LMW compounds of interest, and to solve the problem of lack of universality. The present study proposes an integrated strategy that combines chemical labeling with the supramolecular chemistry of cucurbit[n]uril (CB[n]) for the MALDI MS analysis of LMW compounds in complex samples. In this strategy, the target LMW compounds are first labeled by introducing a series of bifunctional reagents that selectively react with the target analytes and also form stable inclusion complexes with CB[n]. Then, the labeled products act as guest molecules that readily and selectively form stable inclusion complexes with CB[n]. This strategy relocates the MS signals of the LMW compounds of interest from the low mass region, which suffers high interference, to the high mass region, where interference from low mass components is absent. Experimental results demonstrate that a wide range of LMW compounds, including carboxylic acids, aldehydes, amines, thiols, and cis-diols, can be successfully detected using the proposed strategy, and the limits of detection were in the range of 0.01-1.76 nmol/mL. In addition, the high selectivity of the labeling reagents for the target analytes, in conjunction with the high selectivity of the binding between the labeled products and CB[n], ensures an absence of signal interference from the non-targeted LMW components of complex samples. Finally, the feasibility of the proposed strategy for complex sample analysis is demonstrated by the accurate and rapid quantitative analysis of aldehydes in saliva and herbal medicines. As such, this work not only provides an alternative method for the detection of various LMW compounds using MALDI MS, but can also be applied to the selective and high-throughput analysis of LMW analytes in complex samples.

  8. How Cognitive Style and Problem Complexity Affect Preservice Agricultural Education Teachers' Abilities to Solve Problems in Agricultural Mechanics

    ERIC Educational Resources Information Center

    Blackburn, J. Joey; Robinson, J. Shane; Lamm, Alexa J.

    2014-01-01

    The purpose of this experimental study was to determine the effects of cognitive style and problem complexity on Oklahoma State University preservice agriculture teachers' (N = 56) ability to solve problems in small gasoline engines. Time to solution was operationalized as problem solving ability. Kirton's Adaption-Innovation Inventory was…

  9. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular neural networks (CNNs) have become a very good means of solving different kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVN and UBN are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output (MVN), and an arbitrary, not only threshold, Boolean function (UBN). A great advantage of CNNs is the ability to implement any linear and many nonlinear filters in the spatial domain. Together with noise removal, CNNs can implement filters that amplify high and medium frequencies. These filters are a very good means for solving the enhancement problem and the problem of extracting details against a complex background. CNNs thus make it possible to organize the entire processing pipeline, from filtering to the extraction of the important details. The organization of this process for medical image processing is considered in the paper. Major attention is concentrated on the processing of X-ray and ultrasound images corresponding to different oncology (or close to oncology) pathologies. Additionally, we consider a new neural network structure for solving the problem of differential diagnostics of breast cancer.

  10. On the Complexity of Delaying an Adversary’s Project

    DTIC Science & Technology

    2005-01-01

    interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete or NP-hard. We

  11. Solving the Inverse-Square Problem with Complex Variables

    ERIC Educational Resources Information Center

    Gauthier, N.

    2005-01-01

    The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
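
    In standard notation (a sketch consistent with, though not copied from, the article), the complex-variable formulation and the constancy of angular momentum read:

    ```latex
    % z = x + iy is the position, mu the strength of the central attraction.
    \begin{align}
      \ddot z &= -\,\mu\,\frac{z}{|z|^{3}},\\
      \frac{d}{dt}\,\mathrm{Im}\!\left(\bar z\,\dot z\right)
        &= \mathrm{Im}\!\left(\dot{\bar z}\,\dot z\right)
         + \mathrm{Im}\!\left(\bar z\,\ddot z\right)
         = 0 + \mathrm{Im}\!\left(-\mu\,\frac{|z|^{2}}{|z|^{3}}\right) = 0,
    \end{align}
    % since Im(|zdot|^2) = 0; hence L = Im(conj(z) zdot) = r^2 d(theta)/dt
    % is a constant of the motion.
    ```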

  12. Making science high impact to inform decision-making: Using boundary objects for aquatic research

    EPA Science Inventory

    The St. Louis River represents a complex natural resource management problem. Current ecosystem management decisions must address extensive sediment remediation and habitat restoration goals for the lower river and associated port, as well as recreational users who value differen...

  13. Engineering and Computing Portal to Solve Environmental Problems

    NASA Astrophysics Data System (ADS)

    Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.

    2018-01-01

    This paper describes architecture and services of the Engineering and Computing Portal, which is considered to be a complex solution that provides access to high-performance computing resources, enables to carry out computational experiments, teach parallel technologies and solve computing tasks, including technogenic safety ones.

  14. Dynamic Modeling as a Cognitive Regulation Scaffold for Developing Complex Problem-Solving Skills in an Educational Massively Multiplayer Online Game Environment

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Ge, Xun; Ifenthaler, Dirk; Law, Victor

    2011-01-01

    Following a design-based research framework, this article reports two empirical studies with an educational MMOG, called "McLarin's Adventures," on facilitating 9th-grade students' complex problem-solving skill acquisition in interdisciplinary STEM education. The article discusses the nature of complex and ill-structured problem solving…

  15. You Need to Know: There Is a Causal Relationship between Structural Knowledge and Control Performance in Complex Problem Solving Tasks

    ERIC Educational Resources Information Center

    Goode, Natassia; Beckmann, Jens F.

    2010-01-01

    This study investigates the relationships between structural knowledge, control performance and fluid intelligence in a complex problem solving (CPS) task. 75 participants received either complete, partial or no information regarding the underlying structure of a complex problem solving task, and controlled the task to reach specific goals.…

  16. Pre-Service Teachers' Free and Structured Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Silber, Steven; Cai, Jinfa

    2017-01-01

    This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…

  17. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide application background, including transportation systems, communication networks, pipeline design and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L, and we seek a vertex on L such that the length of the tree is minimal. From the definition and the complexity of the Steiner tree problem, we know that this restricted problem is also NP-complete. In part one, we considered the restricted Steiner tree problem with two fixed vertices. Here we naturally consider the problem with three fixed vertices, and we again use the geometric method to solve it.
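
    For the two-fixed-vertex case referred to as part one, the optimal point on L follows from the classical reflection trick; the sketch below assumes L is the x-axis and both terminals lie above it, with purely illustrative coordinates.

    ```python
    # Reflection trick for the two-terminal restricted problem: the tree is
    # a shortest path p -> s -> q through a point s on the line L (x-axis).
    def steiner_point_on_axis(p, q):
        """Point on the x-axis minimizing |p-s| + |s-q| for p, q above it."""
        (px, py), (qx, qy) = p, q
        qy_ref = -qy                   # reflect q across the x-axis
        t = py / (py - qy_ref)         # where segment p -> q' crosses the axis
        return (px + t * (qx - px), 0.0)

    print(steiner_point_on_axis((0.0, 2.0), (4.0, 1.0)))  # -> (8/3, 0)
    ```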

  18. Collective space of high-rise housing complex

    NASA Astrophysics Data System (ADS)

    Bakaeva, Tatyana

    2018-03-01

    The article considers the problem of providing citizens with a comfortable living environment within the limited territory of a megalopolis, and the typological principles of forming the space-planning structure of high-rise residential complexes with public space. Collective space for the residents of high-rise housing estates is considered in detail, drawing on international experience in design and construction. The collective space and the area of the standard apartment are analysed across comfort classes: social (the Pinnacle@Duxton complex), business (Monde Condos) and elite (Hamilton Scotts). An interdependence between the area of the standard flat and the total area of the housing's collective space, depending on the comfort level, is revealed. In conditions of high-density urban development, collective space makes it possible to form a comfortable living environment. Recommendations are made for achieving integrity and improving the quality of the urban environment. Convenient collective space contributes to civic life: it creates a socializing sense of interaction among residents and consolidates the social effect.

  19. The method of selecting an integrated development territory for the high-rise unique constructions

    NASA Astrophysics Data System (ADS)

    Sheina, Svetlana; Shevtsova, Elina; Sukhinin, Alexander; Priss, Elena

    2018-03-01

    On the basis of data provided by the Department of Architecture and Urban Planning of the city of Rostov-on-Don, the problem of choosing a territory for integrated development that should be given priority for the construction of high-rise and unique buildings is solved. The objective of the study was to develop a methodology for selecting such a territory and to apply the proposed method to the evaluation of four territories for integrated development. Along with standard indicators of integrated evaluation, the developed method considers additional indicators that assess a territory from the standpoint of high-rise unique construction. The final result of the study is a ranking of the functional priority of the areas that takes into account the construction of residential as well as public and business objects of unique high-rise construction. The use of the developed methodology will allow investors and customers to assess the investment attractiveness of a future unique construction project on a proposed site.

  20. Predictive value of general movements' quality in low-risk infants for minor neurological dysfunction and behavioural problems at preschool age.

    PubMed

    Bennema, Anne N; Schendelaar, Pamela; Seggers, Jorien; Haadsma, Maaike L; Heineman, Maas Jan; Hadders-Algra, Mijna

    2016-03-01

    General movement (GM) assessment is a well-established tool to predict cerebral palsy in high-risk infants. Little is known on the predictive value of GM assessment in low-risk populations. To assess the predictive value of GM quality in early infancy for the development of the clinically relevant form of minor neurological dysfunction (complex MND) and behavioral problems at preschool age. Prospective cohort study. A total of 216 members of the prospective Groningen Assisted Reproductive Techniques (ART) cohort study were included in this study. ART did not affect neurodevelopmental outcome of these relatively low-risk infants born to subfertile parents. GM quality was determined at 2 weeks and 3 months. At 18 months and 4 years, the Hempel neurological examination was used to assess MND. At 4 years, parents completed the Child Behavior Checklist; this resulted in the total problem score (TPS), internalizing problem score (IPS), and externalizing problem score (EPS). Predictive values of definitely (DA) and mildly (MA) abnormal GMs were calculated. DA GMs at 2 weeks were associated with complex MND at 18 months and atypical TPS and IPS at 4 years (all p<0.05). Sensitivity and positive predictive value of DA GMs at 2 weeks were rather low (13%-60%); specificity and negative predictive value were excellent (92%-99%). DA GMs at 3 months occurred too infrequently to calculate prediction. MA GMs were not associated with outcome. GM quality as a single predictor for complex MND and behavioral problems at preschool age has limited clinical value in children at low risk for developmental disorders.

  1. Factors of Problem-Solving Competency in a Virtual Chemistry Environment: The Role of Metacognitive Knowledge about Strategies

    ERIC Educational Resources Information Center

    Scherer, Ronny; Tiemann, Rudiger

    2012-01-01

    The ability to solve complex scientific problems is regarded as one of the key competencies in science education. Until now, research on problem solving focused on the relationship between analytical and complex problem solving, but rarely took into account the structure of problem-solving processes and metacognitive aspects. This paper,…

  2. Determining the Effects of Cognitive Style, Problem Complexity, and Hypothesis Generation on the Problem Solving Ability of School-Based Agricultural Education Students

    ERIC Educational Resources Information Center

    Blackburn, J. Joey; Robinson, J. Shane

    2016-01-01

    The purpose of this experimental study was to assess the effects of cognitive style, problem complexity, and hypothesis generation on the problem solving ability of school-based agricultural education students. Problem solving ability was defined as time to solution. Kirton's Adaption-Innovation Inventory was employed to assess students' cognitive…

  3. Multichromosomal median and halving problems under different genomic distances

    PubMed Central

    Tannier, Eric; Zheng, Chunfang; Sankoff, David

    2009-01-01

    Background Genome median and genome halving are combinatorial optimization problems that aim at reconstructing ancestral genomes as well as the evolutionary events leading from the ancestor to extant species. Exploring complexity issues is a first step towards devising efficient algorithms. The complexity of the median problem for unichromosomal genomes (permutations) has been settled for both the breakpoint distance and the reversal distance. Although the multichromosomal case has often been assumed to be a simple generalization of the unichromosomal case, it is also a relaxation so that complexity in this context does not follow from existing results, and is open for all distances. Results We settle here the complexity of several genome median and halving problems, including a surprising polynomial result for the breakpoint median and guided halving problems in genomes with circular and linear chromosomes, showing that the multichromosomal problem is actually easier than the unichromosomal problem. Still other variants of these problems are NP-complete, including the DCJ double distance problem, previously mentioned as an open question. We list the remaining open problems. Conclusion This theoretical study clears up a wide swathe of the algorithmical study of genome rearrangements with multiple multichromosomal genomes. PMID:19386099

  4. The Evolution of Biological Complexity in Digital Organisms

    NASA Astrophysics Data System (ADS)

    Ofria, Charles

    2013-03-01

    When Darwin first proposed his theory of evolution by natural selection, he realized that it had a problem explaining the origins of traits of ``extreme perfection and complication'' such as the vertebrate eye. Critics of Darwin's theory have latched onto this perceived flaw as a proof that Darwinian evolution is impossible. In anticipation of this issue, Darwin described the perfect data needed to understand this process, but lamented that such data are ``scarcely ever possible'' to obtain. In this talk, I will discuss research where we use populations of digital organisms (self-replicating and evolving computer programs) to elucidate the genetic and evolutionary processes by which new, highly-complex traits arise, drawing inspiration directly from Darwin's wistful thinking and hypotheses. During the process of evolution in these fully-transparent computational environments we can measure the incorporation of new information into the genome, a process akin to a natural Maxwell's Demon, and identify the original source of any such information. We show that, as Darwin predicted, much of the information used to encode a complex trait was already in the genome as part of simpler evolved traits, and that many routes must be possible for a new complex trait to have a high probability of successfully evolving. In even more extreme examples of the evolution of complexity, we are now using these same principles to examine the evolutionary dynamics that drive major transitions in evolution, that is, transitions to higher levels of organization, which are some of the most complex evolutionary events to occur in nature. Finally, I will explore some of the implications of this research for other aspects of evolutionary biology, as well as ways that these evolutionary principles can be applied toward solving computational and engineering problems.

  5. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment, we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and for the analysis of error distributions.
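
    A compact sketch of the clustering step is given below, assuming the MiniSom package; synthetic vectors stand in for the model output fields, and the per-node error summary illustrates how such a classification can pinpoint regimes where a model misbehaves.

    ```python
    # SOM clustering of multivariate fields plus a per-node error summary.
    from collections import defaultdict
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(2)
    fields = rng.normal(size=(500, 12))   # e.g. 12 variables per grid cell
    errors = rng.normal(size=500)         # model-minus-observation residuals

    som = MiniSom(4, 4, fields.shape[1], sigma=1.0, learning_rate=0.5,
                  random_seed=0)
    som.train_random(fields, 2000)

    # Mean error per SOM node highlights regimes needing attention.
    per_node = defaultdict(list)
    for x, e in zip(fields, errors):
        per_node[som.winner(x)].append(e)
    for node, es in sorted(per_node.items()):
        print(node, round(float(np.mean(es)), 3))
    ```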

  6. Generative model selection using a scalable and size-independent complex network classifier

    NASA Astrophysics Data System (ADS)

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-12-01

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits to a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By the means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks," outperforms existing methods with respect to accuracy, scalability, and size-independence.
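
    The sketch below shows the approach in miniature, assuming networkx and scikit-learn: graphs generated by a few models are described by simple features, and a decision tree is trained to identify the generator. The feature set here is illustrative, not the paper's size-independent one.

    ```python
    # Generative model selection: features from synthetic graphs + decision tree.
    import networkx as nx
    from sklearn.tree import DecisionTreeClassifier

    def features(G):
        degs = [d for _, d in G.degree()]
        mean = sum(degs) / len(degs)
        var = sum((d - mean) ** 2 for d in degs) / len(degs)
        return [nx.average_clustering(G), var / mean]  # clustering, dispersion

    X, y = [], []
    for seed in range(30):
        X.append(features(nx.erdos_renyi_graph(300, 0.02, seed=seed))); y.append("ER")
        X.append(features(nx.barabasi_albert_graph(300, 3, seed=seed))); y.append("BA")
        X.append(features(nx.watts_strogatz_graph(300, 6, 0.1, seed=seed))); y.append("WS")

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([features(nx.barabasi_albert_graph(400, 3, seed=99))]))
    ```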

  7. The clinical management of diabetic foot in the elderly and medico-legal implications.

    PubMed

    Terranova, Claudio; Bruttocao, Andrea

    2013-10-01

    Diabetic foot is a complex and challenging pathological state, characterized by high complexity of management, morbidity and mortality. The elderly present peculiar problems which interfere on one hand with the patient's compliance and on the other with their diagnostic-therapeutic management. Difficult clinical management may result in medico-legal problems, with criminal and civil consequences. In this context, the authors present a review of the literature, analysing aspects concerning the diagnosis and treatment of diabetic foot in the elderly which may turn out to be a source of professional responsibility. Analysis of these aspects provides an opportunity to discuss elements important not only for clinicians and medical workers but also experts (judges, lawyers, medico-legal experts) who must evaluate hypotheses of professional responsibility concerning diabetic foot in the elderly.

  8. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N³) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method that has complexity O(N² log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.

  9. Simulation Study on Missile Penetration Based on LS - DYNA

    NASA Astrophysics Data System (ADS)

    Tang, Jue; Sun, Xinli

    2017-12-01

    Penetrating shell armor is an effective means of destroying hard targets with multiple layers of protection. The penetration process falls within the research field of high-speed impact dynamics, involving high pressure, high temperature, high speed and internal material damage in complex forms including plugging, penetration, spalling, caving and splashing; its analysis is therefore one of the difficulties in the study of impact dynamics. In this paper, the Lagrangian algorithm and the SPH algorithm are used to analyze the penetration of a steel plate. The model of the rocket penetrating the steel plate, the failure modes of the steel plate and the missile, and the advantages and disadvantages of the Lagrangian and SPH algorithms in simulating high-speed collision problems are analyzed and compared, providing a reference for the study of simulated collision problems.

  10. Modern technologies of processing municipal solid waste: investing in the future

    NASA Astrophysics Data System (ADS)

    Rumyantseva, A.; Berezyuk, M.; Savchenko, N.; Rumyantseva, E.

    2017-06-01

    The problem of effective municipal solid waste (MSW) management is known to all the municipal entities of the Russian Federation. The problem is multifaceted and complex. The article analyzes the dynamics of municipal solid waste formation and its utilization within the territory of the EU and Russia. The authors of the paper suggest a project of a plant for processing municipal solid waste into a combustible gas with the help of high temperature pyrolysis. The main indicators of economic efficiency are calculated.

  11. Application of furniture images selection based on neural network

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Gao, Wenwen; Wang, Ying

    2018-05-01

    In the construction of a 2-million-image furniture database, aiming at the problem of low database quality, a combination of CNN and metric learning algorithms is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This solves the problems of existing image screening methods being complex, insufficiently accurate, and time-consuming. After the data quality is improved, the deep learning algorithm achieves excellent image matching ability in actual furniture retrieval applications.

  12. Comparing the basins of attraction for several methods in the circular Sitnikov problem with spheroid primaries

    NASA Astrophysics Data System (ADS)

    Zotos, Euaggelos E.

    2018-06-01

    The circular Sitnikov problem, where the two primary bodies are prolate or oblate spheroids, is numerically investigated. In particular, the basins of convergence on the complex plane are revealed by using a large collection of numerical methods of several order. We consider four cases, regarding the value of the oblateness coefficient which determines the nature of the roots (attractors) of the system. For all cases we use the iterative schemes for performing a thorough and systematic classification of the nodes on the complex plane. The distribution of the iterations as well as the probability and their correlations with the corresponding basins of convergence are also discussed. Our numerical computations indicate that most of the iterative schemes provide relatively similar convergence structures on the complex plane. However, there are some numerical methods for which the corresponding basins of attraction are extremely complicated with highly fractal basin boundaries. Moreover, it is proved that the efficiency strongly varies between the numerical methods.
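
    The flavor of such basin-of-convergence plots is easy to reproduce; the sketch below uses NumPy and Newton's method on a toy cubic rather than the Sitnikov system itself, classifying each starting point on the complex plane by the root it converges to.

    ```python
    # Toy basins of convergence: Newton's method for z**3 - 1 on a grid.
    import numpy as np

    xs = np.linspace(-2, 2, 400)
    Z = xs[None, :] + 1j * xs[:, None]              # grid of complex starts
    roots = np.exp(2j * np.pi * np.arange(3) / 3)   # cube roots of unity

    for _ in range(40):                 # Newton iteration z <- z - f/f'
        Z = Z - (Z**3 - 1) / (3 * Z**2)

    # Basin index: nearest root after iterating (fractal boundaries emerge).
    basins = np.argmin(np.abs(Z[..., None] - roots[None, None, :]), axis=-1)
    print(np.bincount(basins.ravel()))  # points captured by each attractor
    ```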

  13. Effective algorithm for solving complex problems of production control and of material flows control of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Mezentsev, Yu A.; Baranova, N. V.

    2018-05-01

    A universal economic-mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The declared universality allows both production components, including limitations on the ways of converting raw materials and components into sold goods, and resource and logical restrictions on input and output material flows to be taken into account at the system level. The presented model and the generated control problems are developed within a unified framework that allows logical conditions of arbitrary complexity to be implemented and the corresponding formal optimization tasks to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming problems are shown to belong to the class NP. An approximate polynomial algorithm is proposed for solving the posed mixed-programming optimization problems of realistic dimension and high computational complexity. Results of testing the algorithm on tasks across a wide range of dimensions are presented.

  14. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
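
    To make the cost structure concrete: the dominant work in propagating the differential matrix Riccati equation backward from its terminal condition is a handful of O(n^3) matrix products per time step. The sketch below is an illustrative fixed-step RK4 integrator, not one of the algorithms whose complexities the paper tabulates.

    ```python
    import numpy as np

    def riccati_backward(A, B, Q, R, Qf, T, steps=1000):
        """Integrate -dP/dt = A'P + PA - P B R^{-1} B' P + Q backward
        from P(T) = Qf to P(0) with RK4. Each step costs a few O(n^3)
        matrix products, which dominates for large-order systems.
        """
        S = B @ np.linalg.inv(R) @ B.T

        def f(P):  # dP/dt
            return -(A.T @ P + P @ A - P @ S @ P + Q)

        h = -T / steps            # negative step: march from t = T to t = 0
        P = Qf.copy()
        for _ in range(steps):
            k1 = f(P)
            k2 = f(P + 0.5 * h * k1)
            k3 = f(P + 0.5 * h * k2)
            k4 = f(P + h * k3)
            P = P + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        return P
    ```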

  15. Using a Semi-Realistic Database to Support a Database Course

    ERIC Educational Resources Information Center

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is constructing effective databases for instruction and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  16. 75 FR 41501 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-16

    ..., mental retardation or learning disabilities, and behavioral problems, as well as malformations in many... degree of mental retardation and learning disability. Biochemically, SLOS is caused by disruption of the..., but the mechanism of resistance is highly complex; this mouse model will be useful in learning the...

  17. Building a Greener Future

    ERIC Educational Resources Information Center

    Baldwin, Blake; Koenig, Kathleen; Van der Bent, Andries

    2016-01-01

    Integrating engineering and science in the classroom can be challenging, and creating authentic experiences that address real-world problems is often even more difficult. "A Framework for K-12 Science Education" (NRC 2012), however, calls for high school graduates to be able to undertake more complex engineering design projects related…

  18. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
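
    For a flavor of the high-level constructs involved, the sketch below sets up a toy dynamic optimization with Pyomo's pyomo.dae extension (ContinuousSet, DerivativeVar, and a collocation transformation). It is a deliberately simple first-order system, not the paper's semibatch reactor or adsorber models, and solving it requires an external NLP solver such as IPOPT.

    ```python
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               TransformationFactory, minimize)
    from pyomo.dae import ContinuousSet, DerivativeVar

    # Toy problem: drive dx/dt = -x + u toward x = 1 at low control effort.
    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0, 5))
    m.x = Var(m.t, initialize=0.0)
    m.u = Var(m.t, bounds=(0, 2), initialize=0.0)
    m.dxdt = DerivativeVar(m.x, wrt=m.t)

    def _ode(m, t):
        return m.dxdt[t] == -m.x[t] + m.u[t]
    m.ode = Constraint(m.t, rule=_ode)
    m.x[0].fix(0.0)

    # Discretize the continuous-time problem before handing it to a solver.
    TransformationFactory('dae.collocation').apply_to(m, nfe=20, ncp=3)

    m.obj = Objective(expr=sum((m.x[t] - 1.0)**2 + 0.1 * m.u[t]**2
                               for t in m.t), sense=minimize)
    # from pyomo.environ import SolverFactory; SolverFactory('ipopt').solve(m)
    ```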

  19. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  20. [Questions concerning humanitarian action].

    PubMed

    Simonnot, C

    2002-01-01

    Although the development of humanitarian action is rooted in historical events, the dynamics behind today's international relief organizations can only be understood within the context of the modern world. Relief organizations are currently confronted with major challenges and paradoxes. The challenges include the need to enhance the professionalization and standardization of assistance operations and exposure to greater risks. The paradoxes involve the need to implement complex, highly publicized programs in a simplistic manner and the problems involved in managing the complex relationship between relief workers and victims, tainted by the almighty power of the actors.

  1. Some observations on a new numerical method for solving Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1981-01-01

    An explicit-implicit technique for solving the Navier-Stokes equations is described which is much less complex than other implicit methods. It is used to solve a complex, two-dimensional, steady-state, supersonic-flow problem. The computational efficiency of the method and the quality of the solution obtained from it at high Courant-Friedrichs-Lewy (CFL) numbers are discussed. Modifications are discussed and certain observations are made about the method which may be helpful in using it successfully.

  2. Evidence of Critical Thinking in High School Humanities Classrooms (Evidencias del Pensamiento Crítico en las Clases de Ciencias Humanas en Bachillerato)

    ERIC Educational Resources Information Center

    Alfonso, David Vargas

    2015-01-01

    Critical thinking skills (CTS) are a group of higher order thinking abilities related with complex processes of learning like contextualization or problem solving. This exploratory research study identified whether critical thinking skills were present in high school humanities classrooms. The study was carried out in a private school in Bogotá,…

  3. Does constructive neutral evolution play an important role in the origin of cellular complexity? Making sense of the origins and uses of biological complexity.

    PubMed

    Speijer, Dave

    2011-05-01

    Recently, constructive neutral evolution has been touted as an important concept for the understanding of the emergence of cellular complexity. It has been invoked to help explain the development and retention of, amongst others, RNA splicing, RNA editing, and ribosomal and mitochondrial respiratory chain complexity. The theory originated as a welcome explanation of isolated small-scale cellular idiosyncrasies and as a reaction to 'overselectionism'. Here I contend that, in its extended form, it has major conceptual problems, cannot explain observed patterns of complex processes, is too easily dismissive of alternative selectionist models, underestimates the creative force of complexity as such, and, if seen as a major evolutionary mechanism for all organisms, could stifle further thought regarding the evolution of highly complex biological processes. Copyright © 2011 WILEY Periodicals, Inc.

  4. Complexity of GPs' explanations about mental health problems: development, reliability, and validity of a measure

    PubMed Central

    Cape, John; Morris, Elena; Burd, Mary; Buszewicz, Marta

    2008-01-01

    Background How GPs understand mental health problems determines their treatment choices; however, measures describing GPs' thinking about such problems are not currently available. Aim To develop a measure of the complexity of GP explanations of common mental health problems and to pilot its reliability and validity. Design of study A qualitative development of the measure, followed by inter-rater reliability and validation pilot studies. Setting General practices in North London. Method Vignettes of simulated consultations with patients with mental health problems were videotaped, and an anchored measure of complexity of psychosocial explanation in response to these vignettes was developed. Six GPs, four psychologists, and two lay people viewed the vignettes. Their responses were rated for complexity, both using the anchored measure and independently by two experts in primary care mental health. In a second reliability and revalidation study, responses of 50 GPs to two vignettes were rated for complexity. The GPs also completed a questionnaire to determine their interest and training in mental health, and they completed the Depression Attitudes Questionnaire. Results Inter-rater reliability of the measure of complexity of explanation in both pilot studies was satisfactory (intraclass correlation coefficient = 0.78 and 0.72). The measure correlated with expert opinion as to what constitutes a complex explanation, and the responses of psychologists, GPs, and lay people differed in measured complexity. GPs with higher complexity scores had greater interest, more training in mental health, and more positive attitudes to depression. Conclusion Results suggest that the complexity of GPs' psychosocial explanations about common mental health problems can be reliably and validly assessed by this new standardised measure. PMID:18505616

  5. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis, that of the inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
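
    The information-content question can be made concrete with empirical entropies: for a discrete function of three variables, the mutual information between the inputs and the output measures how completely the arguments determine the output. A from-scratch sketch on a toy XOR-style function follows; it is not the authors' classification machinery.

    ```python
    import numpy as np
    from collections import Counter

    def entropy(samples) -> float:
        """Shannon entropy (bits) of an empirical symbol distribution."""
        counts = Counter(samples)
        n = sum(counts.values())
        p = np.array([c / n for c in counts.values()])
        return float(-(p * np.log2(p)).sum())

    # Toy function of three binary variables: z = x XOR y, with w irrelevant.
    rng = np.random.default_rng(1)
    x, y, w = (rng.integers(0, 2, 5000) for _ in range(3))
    z = x ^ y

    # I(inputs; z) = H(inputs) + H(z) - H(inputs, z)
    inputs = list(zip(x, y, w))
    joint = list(zip(x, y, w, z))
    mi = entropy(inputs) + entropy(list(z)) - entropy(joint)
    print(f"I(inputs; z) = {mi:.3f} bits")  # ~1 bit: z is fully determined
    ```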

  6. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis, that of the inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  7. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis, that of the inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  8. Clinical Problem Analysis (CPA): A Systematic Approach To Teaching Complex Medical Problem Solving.

    ERIC Educational Resources Information Center

    Custers, Eugene J. F. M.; Robbe, Peter F. De Vries; Stuyt, Paul M. J.

    2000-01-01

    Discusses clinical problem analysis (CPA) in medical education, an approach to solving complex clinical problems. Outlines the five step CPA model and examines the value of CPA's content-independent (methodical) approach. Argues that teaching students to use CPA will enable them to avoid common diagnostic reasoning errors and pitfalls. Compares…

  9. Preliminary Analysis of Perfusionists’ Strategies for Managing Routine and Failure Mode Scenarios in Cardiopulmonary Bypass

    PubMed Central

    Power, Gerald; Miller, Anne

    2007-01-01

    Abstract: Cardiopulmonary bypass (CPB) is a complex task requiring high levels of practitioner expertise. Although some education standards exist, few are based on an analysis of perfusionists’ problem-solving needs. This study shows the efficacy of work domain analysis (WDA) as a framework for analyzing perfusionists’ conceptualization and problem-solving strategies. A WDA model of a CPB circuit was developed. A high-fidelity CPB simulator (Manbit) was used to present routine and oxygenator failure scenarios to six proficient perfusionists. The video-cued recall technique was used to elicit perfusionists’ conceptualization strategies. The resulting recall transcripts were coded using the WDA model and analyzed for associations between task completion times and patterns of conceptualization. The WDA model developed successfully accounted for and described the thought process followed by each participant. It was also shown that, although there was no correlation between experience with CPB and the ability to change an oxygenator, there was a link between specific thought patterns and efficiency in undertaking this task. Simulators are widely used in many fields of human endeavor, and in this research the attempt was made to use WDA to gain insights into the complexities of human thought when engaged in the complex task of conducting CPB. The assumption that experience equates with ability is challenged; rather, it is shown that thought process is a more significant determinant of success when engaged in complex tasks. WDA in combination with a CPB simulator may be used to elucidate successful strategies for completing complex tasks. PMID:17972450

  10. An efficient hybrid technique in RCS predictions of complex targets at high frequencies

    NASA Astrophysics Data System (ADS)

    Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe

    2017-09-01

    Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to the computation of the RCS of all surfaces of a complex object because of caustic problems in the analysis of concave surfaces or of flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, exploiting the advantages and avoiding the disadvantages of each. The new combination yields a very efficient and accurate method for analyzing the RCS of complex structures at high frequencies. The proposed method has been validated by comparing RCS results obtained with it for some simple cases against the rigorous Method of Moments (MoM). Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for computing the RCS of very large and complex targets at high frequencies.
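
    A standard sanity check for any PO-based RCS code is the closed-form result for a flat plate at normal incidence, sigma = 4 * pi * A^2 / lambda^2, where A is the plate area. A quick evaluation with illustrative numbers (not taken from the paper):

    ```python
    import numpy as np

    lam = 0.03                # wavelength at 10 GHz (m), illustrative
    A = 0.1 * 0.1             # 10 cm x 10 cm plate area (m^2), illustrative
    sigma = 4 * np.pi * A**2 / lam**2
    print(10 * np.log10(sigma), "dBsm")   # about 1.4 dBsm
    ```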

  11. High field hyperpolarization-EXSY experiment for fast determination of dissociation rates in SABRE complexes

    NASA Astrophysics Data System (ADS)

    Hermkens, Niels K. J.; Feiters, Martin C.; Rutjes, Floris P. J. T.; Wijmenga, Sybren S.; Tessari, Marco

    2017-03-01

    SABRE (Signal Amplification By Reversible Exchange) is a nuclear spin hyperpolarization technique based on the reversible concurrent binding of small molecules and para-hydrogen (p-H2) to an iridium metal complex in solution. At low magnetic field, spontaneous conversion of p-H2 spin order to enhanced longitudinal magnetization of the nuclear spins of the other ligands occurs. Subsequent complex dissociation results in hyperpolarized substrate molecules in solution. The lifetime of this complex plays a crucial role in the attained SABRE NMR signal enhancements. Depending on the ligands, vastly different dissociation rates have been previously measured using EXSY or selective inversion experiments. However, both these approaches are generally time-consuming due to the long recycle delays (up to 2 min) necessary to reach thermal equilibrium for the nuclear spins of interest. In the case of dilute solutions, signal averaging aggravates the problem, further extending the experimental time. Here, a new approach is proposed based on coherent hyperpolarization transfer to substrate protons in asymmetric complexes at high magnetic field. We have previously shown that such asymmetric complexes are important for the application of SABRE to dilute substrates. Our results demonstrate that a series of high-sensitivity EXSY spectra can be collected in a short experimental time thanks to the NMR signal enhancement and a much shorter recycle delay.

  12. Complex Problem Solving in a Workplace Setting.

    ERIC Educational Resources Information Center

    Middleton, Howard

    2002-01-01

    Studied complex problem solving in the hospitality industry through interviews with six office staff members and managers. Findings show it is possible to construct a taxonomy of problem types and that the most common approach can be termed "trial and error." (SLD)

  13. Moneymed: a game to develop management skills in general practice

    PubMed Central

    Essex, B.; Jackson, R. N.

    1981-01-01

    A game has been developed to train people in the financial and administrative skills needed for effective general practice management. These skills cover a wide range of legal, economic, administrative and personnel problems encountered in general practice. Thirty-four trainees and six trainers showed a highly significant improvement in knowledge and problem-solving skills after playing the game. The format and design of the game allow the problem type, complexity and solution to vary and to be readily updated. So far, this seems to be one of the most effective instruments yet developed for learning these skills. PMID:7338867

  14. Beyond rules: The next generation of expert systems

    NASA Technical Reports Server (NTRS)

    Ferguson, Jay C.; Wagner, Robert E.

    1987-01-01

    The PARAGON Representation, Management, and Manipulation system is introduced. The concepts of knowledge representation, knowledge management, and knowledge manipulation are combined in a comprehensive system for solving real-world problems requiring high levels of expertise in a real-time environment. In most applications, the complexity of the problem and the representation used to describe the domain knowledge tend to obscure the information from which solutions are derived. This inhibits the acquisition and verification/validation of domain knowledge, places severe constraints on the ability to extend and maintain a knowledge base, and makes generic problem-solving strategies difficult to develop. A unique hybrid system was developed to overcome these traditional limitations.

  15. Insight and analysis problem solving in microbes to machines.

    PubMed

    Clark, Kevin B

    2015-11-01

    A key feature for obtaining solutions to difficult problems, insight is oftentimes vaguely regarded as a special discontinuous intellectual process and/or a cognitive restructuring of problem representation or goal approach. However, this nearly century-old state of the art, devised by the Gestalt tradition to explain the non-analytical or non-trial-and-error, goal-seeking aptitude of primate mentality, tends to neglect the problem-solving capabilities of lower animal phyla, Kingdoms other than Animalia, and advancing smart computational technologies built from biological, artificial, and composite media. Attempting to provide an inclusive, precise definition of insight, two major criteria of insight, discontinuous processing and problem restructuring, are here reframed using the terminology and statistical mechanical properties of computational complexity classes. Discontinuous processing becomes abrupt state transitions in algorithmic/heuristic outcomes or in the types of algorithms/heuristics executed by agents using classical and/or quantum computational models. And problem restructuring becomes combinatorial reorganization of resources, problem-type substitution, and/or exchange of computational models. With insight bounded by computational complexity, humans, ciliated protozoa, and complex technological networks, for example, show insight when restructuring time requirements, combinatorial complexity, and problem type to solve polynomial and nondeterministic polynomial decision problems. Similar effects are expected from other problem types, supporting the idea that insight might be an epiphenomenon of analytical problem solving and consequently part of a larger information processing framework. Thus, this computational complexity definition of insight improves the power, external and internal validity, and reliability of operational parameters with which to classify, investigate, and produce the phenomenon for computational agents ranging from microbes to man-made devices. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. High resolution clear native electrophoresis for in-gel functional assays and fluorescence studies of membrane protein complexes.

    PubMed

    Wittig, Ilka; Karas, Michael; Schägger, Hermann

    2007-07-01

    Clear native electrophoresis and blue native electrophoresis are microscale techniques for the isolation of membrane protein complexes. The Coomassie Blue G-250 dye used in blue native electrophoresis interferes with in-gel fluorescence detection and in-gel catalytic activity assays. This problem can be overcome by omitting the dye, as in clear native electrophoresis. However, clear native electrophoresis suffers from enhanced protein aggregation and broadening of protein bands during electrophoresis and has therefore been used rarely. To preserve the advantages of both electrophoresis techniques, we substituted the Coomassie dye in the cathode buffer of blue native electrophoresis with non-colored mixtures of anionic and neutral detergents. Like the Coomassie dye, these mixed micelles imposed a charge shift on the membrane proteins to enhance their anodic migration and improved membrane protein solubility during electrophoresis. This improved clear native electrophoresis offers a resolution of membrane protein complexes comparable to that of blue native electrophoresis. We demonstrate the superiority of high resolution clear native electrophoresis for in-gel catalytic activity assays of mitochondrial complexes I-V. We present the first in-gel histochemical staining protocol for respiratory complex III. Moreover, we demonstrate the special advantages of high resolution clear native electrophoresis for in-gel detection of proteins labeled with reactive fluorescent dyes or tagged with fluorescent proteins. These advantages make high resolution clear native electrophoresis the superior technique for functional proteomics analyses.

  17. Translating concepts of complexity to the field of ergonomics.

    PubMed

    Walker, Guy H; Stanton, Neville A; Salmon, Paul M; Jenkins, Daniel P; Rafferty, Laura

    2010-10-01

    Since 1958, more than 80 journal papers from the mainstream ergonomics literature have used either the word 'complex' or 'complexity' in their titles. Of those, more than 90% have been published in only the past 20 years. This observation communicates something interesting about the way in which contemporary ergonomics problems are being understood. The study of complexity itself derives from non-linear mathematics, but many of its core concepts have found analogies in numerous non-mathematical domains. Set against this cross-disciplinary background, the current paper aims to provide a similar initial mapping to the field of ergonomics. In it, the ergonomics problem space, complexity metrics and powerful concepts such as emergence raise complexity to the status of an important contingency factor in achieving a match between ergonomics problems and ergonomics methods. The concept of relative predictive efficiency is used to illustrate how this match could be achieved in practice. What is clear overall is that the humans in systems are both a major source of, and solution to, complexity. Understanding complexity on its own terms offers the potential to leverage disproportionate effects from ergonomics interventions and to tighten up the often loose usage of the term in the titles of ergonomics papers. STATEMENT OF RELEVANCE: This paper reviews and discusses concepts from the study of complexity and maps them to ergonomics problems and methods. It concludes that humans are a major source of and solution to complexity in systems and that complexity is a powerful contingency factor, which should be considered to ensure that ergonomics approaches match the true nature of ergonomics problems.

  18. Assessment and Treatment of Personality Disorders: A Behavioral Perspective

    ERIC Educational Resources Information Center

    Nelson-Gray, Rosemery O.; Lootens, Christopher M.; Mitchell, John T.; Robertson, Christopher D.; Hundt, Natalie E.; Kimbrel, Nathan A.

    2009-01-01

    Personality disorders are complex and highly challenging to treatment providers; yet, for clients with these problems, there exist very few treatment options that have been supported by research. Given the lack of empirically-supported therapies for personality disorders, it can be difficult to make treatment decisions for this population. The…

  19. Authenticity of Mathematical Modeling

    ERIC Educational Resources Information Center

    Tran, Dung; Dougherty, Barbara J.

    2014-01-01

    Some students leave high school never quite sure of the relevancy of the mathematics they have learned. They fail to see links between school mathematics and the mathematics of everyday life that requires thoughtful decision making and often complex problem solving. Is it possible to bridge the gap between school mathematics and the mathematics in…

  20. Advanced Interactive Web Technologies in Industry Training.

    ERIC Educational Resources Information Center

    Vassileva, Tania; Astinov, Ilario; Bojkov, Dimitar; Tchoumatchenko, Vassiliy; Scholten, Ulrich; Furnadziev, Ivan

    Today, faced with the problems of global competition, increasing costs, and complex production engineering, a company can only be successfully managed if the employees are motivated and highly qualified. To cope with this demand the new educational scheme for cost-effective retraining, lifelong learning and distance education at the workplace…

  1. Big Data Goes Personal: Privacy and Social Challenges

    ERIC Educational Resources Information Center

    Bonomi, Luca

    2015-01-01

    The Big Data phenomenon is posing new challenges in our modern society. In addition to requiring information systems to effectively manage high-dimensional and complex data, the privacy and social implications associated with the data collection, data analytics, and service requirements create new important research problems. First, the high…

  2. Analysis of a combat problem - The turret game

    NASA Technical Reports Server (NTRS)

    Ardema, M.; Heymann, M.; Rajan, N.

    1987-01-01

    The turret game is defined and solved to illustrate the nature of games of combat. This game represents a highly simplified version of air combat, yet it is sufficiently complex so as to exhibit a rich variety of combat phenomena. A review of the formulation of delta-combat games is included.

  3. Academic Procrastination: Frequency and Cognitive-Behavioral Correlates.

    ERIC Educational Resources Information Center

    Solomon, Laura J.; Rothblum, Esther D.

    1984-01-01

    Investigated the frequency of and reasons for college students' (N=342) procrastination on academic tasks. A high percentage of students reported problems with procrastination. Results indicated that procrastination is not solely a deficit in study habits or time management but involves a complex interaction of behavioral, cognitive, and affective…

  4. Creative Thinking: Processes, Strategies, and Knowledge

    ERIC Educational Resources Information Center

    Mumford, Michael D.; Medeiros, Kelsey E.; Partlow, Paul J.

    2012-01-01

    Creative achievements are the basis for progress in our world. Although creative achievement is influenced by many variables, the basis for creativity is held to lie in the generation of high-quality, original, and elegant solutions to complex, novel, ill-defined problems. In the present effort, we examine the cognitive capacities that make…

  5. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear much closer in complexity to source models inverted from observations. Incorporating fault geometry at all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of the rupture's inherent length scales, below which the rupture ceases to be sensitive to the roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip weakening on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
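
    Rough-fault studies of this kind typically build the fault surface by spectral synthesis: impose a power-law amplitude spectrum with a chosen Hurst exponent and randomize the phases. A one-dimensional sketch of that construction follows; the study's actual 3D fault meshes for SeisSol are of course far more involved.

    ```python
    import numpy as np

    def rough_profile(n=4096, hurst=0.8, rms=1.0, seed=0):
        """Synthesize a 1D self-affine profile: a power-law amplitude
        spectrum |h(k)| ~ k**-(0.5 + hurst) with random phases gives a
        fractal trace with Hurst exponent `hurst`.
        """
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n)
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** -(0.5 + hurst)      # k = 0 (mean) stays zero
        phase = rng.uniform(0, 2 * np.pi, k.size)
        h = np.fft.irfft(amp * np.exp(1j * phase), n)
        return h * (rms / h.std())             # rescale to target roughness

    profile = rough_profile()
    # Zeroing the high-wavenumber end of `amp` before the inverse FFT is
    # one way to vary the roughness spectral content, as done in such studies.
    ```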

  6. Development and Application of Agglomerated Multigrid Methods for Complex Geometries

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2010-01-01

    We report progress in the development of agglomerated multigrid techniques for fully unstructured grids in three dimensions, building upon two previous studies focused on efficiently solving a model diffusion equation. We demonstrate a robust fully-coarsened agglomerated multigrid technique for 3D complex geometries, incorporating the following key developments: consistent and stable coarse-grid discretizations, a hierarchical agglomeration scheme, and line-agglomeration/relaxation using prismatic-cell discretizations in the highly-stretched grid regions. A significant speed-up in computer time is demonstrated for a model diffusion problem, the Euler equations, and the Reynolds-averaged Navier-Stokes equations for 3D realistic complex geometries.
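
    The agglomerated scheme generalizes the classic geometric two-grid cycle to unstructured meshes. For orientation, here is that underlying pattern (pre-smooth, restrict the residual, solve the coarse problem, prolong, post-smooth) in the simplest possible setting, 1D Poisson with zero Dirichlet ends; this is a textbook sketch, not the authors' 3D agglomeration code.

    ```python
    import numpy as np

    def v_cycle(f, omega=2/3, sweeps=2):
        """One two-grid V-cycle for -u'' = f on (0,1), n interior points
        (n odd), zero Dirichlet boundary values."""
        n = f.size
        h2 = 1.0 / (n + 1) ** 2

        def jacobi(u, f, k):                   # weighted-Jacobi smoother
            for _ in range(k):
                up = np.pad(u, 1)              # zeros model the boundary
                u = (1 - omega) * u + omega * 0.5 * (up[:-2] + up[2:] + h2 * f)
            return u

        u = jacobi(np.zeros(n), f, sweeps)     # pre-smoothing
        up = np.pad(u, 1)
        r = f - (2 * u - up[:-2] - up[2:]) / h2    # fine-grid residual

        nc = (n - 1) // 2                      # coarse grid: every 2nd node
        rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]
        Ac = (np.diag(2 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
              - np.diag(np.ones(nc - 1), -1)) / (4 * h2)
        ec = np.linalg.solve(Ac, rc)           # exact coarse-grid solve

        e = np.zeros(n)                        # linear prolongation
        e[1::2] = ec
        ecp = np.pad(ec, 1)
        e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
        return jacobi(u + e, f, sweeps)        # correct, then post-smooth

    u = v_cycle(np.ones(63))                   # one cycle on -u'' = 1
    ```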

  7. Challenges of Developing New Classes of NASA Self-Managing Mission

    NASA Technical Reports Server (NTRS)

    Hinchey, M. G.; Rash, J. I.; Truszkowski, W. F.; Rouff, C. A.; Sterritt, R.

    2005-01-01

    NASA is proposing increasingly complex missions that will require a high degree of autonomy and autonomicity. These missions pose hereto unforeseen problems and raise issues that have not been well-addressed by the community. Assuring success of such missions will require new software development techniques and tools. This paper discusses some of the challenges that NASA and the rest of the software development community are facing in developing these ever-increasingly complex systems. We give an overview of a proposed NASA mission as well as techniques and tools that are being developed to address autonomic management and the complexity issues inherent in these missions.

  8. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension and in certain cases non-convex. For the objective of minimal reactive power, an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
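
    With a single linear field constraint, the minimal-power problem has a closed-form least-norm solution, which conveys the flavor of the optimization. The sketch below discretizes a transmitter into concentric loops; the geometry, distance, and required field are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    # Row vector a maps loop currents to the on-axis field at the receiver
    # (linear, by Biot-Savart); minimizing ||I||^2 (proportional to ohmic
    # power for equal loop resistance) subject to a @ I = B_req is a
    # least-norm problem with solution I = a * B_req / (a . a).
    mu0 = 4e-7 * np.pi
    radii = np.linspace(0.01, 0.10, 50)    # candidate loop radii (m)
    d = 1.0                                # receiver distance (m)
    a = mu0 * radii**2 / (2 * (radii**2 + d**2) ** 1.5)

    B_req = 1e-12                          # required field (T)
    I_opt = a * (B_req / (a @ a))          # least-norm current distribution
    print(a @ I_opt)                       # equals B_req: constraint met
    ```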

  9. Schizophrenia, narrative, and neurocognition: The utility of life-stories in understanding social problem-solving skills.

    PubMed

    Moe, Aubrey M; Breitborde, Nicholas J K; Bourassa, Kyle J; Gallagher, Colin J; Shakeel, Mohammed K; Docherty, Nancy M

    2018-06-01

    Schizophrenia researchers have focused on phenomenological aspects of the disorder to better understand its underlying nature. In particular, the development of personal narratives (that is, the complexity with which people form, organize, and articulate their "life stories") has recently been investigated in individuals with schizophrenia. However, less is known about how aspects of narrative relate to indicators of neurocognitive and social functioning. The objective of the present study was to investigate the association of the linguistic complexity of life-story narratives with measures of cognitive and social problem-solving abilities among people with schizophrenia. Thirty-two individuals with a diagnosis of schizophrenia completed a research battery consisting of clinical interviews, a life-story narrative, neurocognitive testing, and a measure assessing multiple aspects of social problem solving. Narrative interviews were assessed for linguistic complexity using computerized technology. The results indicate differential relationships of linguistic complexity and neurocognition to domains of social problem-solving skills. More specifically, although neurocognition predicted how well one could both describe and enact a solution to a social problem, linguistic complexity alone was associated with accurately recognizing that a social problem had occurred. In addition, linguistic complexity appears to be a cognitive factor that is discernible from other, broader measures of neurocognition. Linguistic complexity may be more relevant in understanding the earlier steps of the social problem-solving process than more traditional, broad measures of cognition, and thus is relevant in conceptualizing treatment targets. These findings also support the relevance of developing narrative-focused psychotherapies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. [Multiple colonic anastomoses in the surgical treatment of short bowel syndrome. A new technique].

    PubMed

    Robledo-Ogazón, Felipe; Becerril-Martínez, Guillermo; Hernández-Saldaña, Víctor; Zavala-Aznar, Marí Luisa; Bojalil-Durán, Luis

    2008-01-01

    Some surgical pathologies eventually require intestinal resection. This may lead to an extended procedure, such as one leaving only 30 cm of proximal jejunum together with the left and sigmoid colon. One of the most important consequences of this type of resection is "intestinal failure," or short bowel syndrome. This complex syndrome leads to various metabolic, fluid, and acid/base imbalances, as well as nutritional and immunological challenges, along with the problems accompanying an abdomen subjected to many surgical procedures, and carries high mortality. Many surgical techniques have been developed to improve the quality of life of these patients. We designed a non-transplant surgical approach and performed the procedure on two patients with postoperative short bowel syndrome with <40 cm of proximal jejunum and left colon. There is a variety of non-transplant surgical procedures that, due to their complex technique or high mortality rate, have not resolved this important problem. However, the technique we present in this work can be performed by a large number of surgeons. The procedure has a low morbimortality rate and offers the opportunity for better control of metabolic and acid/base balance, intestinal transit and proper nutrition. We consider that this technique offers a new alternative for the complex management required by patients with short bowel syndrome and facilitates their long-term nutritional control.

  11. Informatics Metrics and Measures for a Smart Public Health Systems Approach: Information Science Perspective

    PubMed Central

    Shea, Christopher Michael

    2017-01-01

    Public health informatics is an evolving domain in which practices constantly change to meet the demands of a highly complex public health and healthcare delivery system. Given the emergence of various concepts, such as learning health systems, smart health systems, and adaptive complex health systems, health informatics professionals would benefit from a common set of measures and capabilities to inform our modeling, measuring, and managing of health system “smartness.” Here, we introduce the concepts of organizational complexity, problem/issue complexity, and situational awareness as three codependent drivers of smart public health systems characteristics. We also propose seven smart public health systems measures and capabilities that are important in a public health informatics professional's toolkit. PMID:28167999

  12. Informatics Metrics and Measures for a Smart Public Health Systems Approach: Information Science Perspective.

    PubMed

    Carney, Timothy Jay; Shea, Christopher Michael

    2017-01-01

    Public health informatics is an evolving domain in which practices constantly change to meet the demands of a highly complex public health and healthcare delivery system. Given the emergence of various concepts, such as learning health systems, smart health systems, and adaptive complex health systems, health informatics professionals would benefit from a common set of measures and capabilities to inform our modeling, measuring, and managing of health system "smartness." Here, we introduce the concepts of organizational complexity, problem/issue complexity, and situational awareness as three codependent drivers of smart public health systems characteristics. We also propose seven smart public health systems measures and capabilities that are important in a public health informatics professional's toolkit.

  13. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, and particularly in oil exploration, such problems arise as the forward problem within an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to obtain meaningful solutions of the inverse problems efficiently: low-order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. A fast solver is therefore necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy has historically been challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods with linear complexity in the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces that reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.
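
    The forward problem being preconditioned is the discretized Helmholtz equation. The 1D finite-difference toy version below shows what such a solver must handle, including a discontinuous wave speed; the dense direct solve stands in for the sweeping preconditioner that makes the 3D problem tractable (and real solvers use absorbing rather than Dirichlet boundaries).

    ```python
    import numpy as np

    def helmholtz_1d(c, omega, f, h):
        """Second-order finite differences for u'' + (omega/c)**2 u = f
        with zero Dirichlet ends; c may vary (even jump) with position."""
        n = f.size
        k2 = (omega / c) ** 2
        A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2 + np.diag(k2)
        return np.linalg.solve(A, f)

    # Point source in a two-layer medium (discontinuous wave speed).
    n = 2000
    h = 1.0 / (n + 1)
    c = np.where(np.arange(n) < n // 2, 1.0, 2.0)
    f = np.zeros(n)
    f[n // 4] = 1.0 / h
    u = helmholtz_1d(c, omega=200.0, f=f, h=h)
    ```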

  14. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training

    PubMed Central

    Boonen, Anton J. H.; de Koning, Björn B.; Jolles, Jelle; van der Schoot, Menno

    2016-01-01

    Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from an RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption among 80 sixth-grade students who were classified as successful or less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that call on both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers performed poorly on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we conclude that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME. PMID:26925012

  15. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training.

    PubMed

    Boonen, Anton J H; de Koning, Björn B; Jolles, Jelle; van der Schoot, Menno

    2016-01-01

    Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from an RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption among 80 sixth-grade students who were classified as successful or less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that call on both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers performed poorly on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we conclude that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME.

  16. Managing search complexity in linguistic geometry.

    PubMed

    Stilman, B

    1997-01-01

    This paper is a new step in the development of linguistic geometry. This formal theory is intended to discover and generalize the inner properties of human expert heuristics that have been successful in a certain class of complex control systems, and to apply them to different systems. In this paper, we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing linguistic geometry tools, the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown in the paper on two pilot examples of the solution of complex optimization problems. The first example is a problem of strategic planning for air combat, in which concurrent actions of four vehicles are simulated as serial interleaving moves. The second example is a problem of strategic planning for the space combat of eight autonomous vehicles (with interleaving moves) that requires generation of a search tree of depth 25 with a branching factor of 30; as the worked arithmetic below shows, this is beyond the capabilities of modern and any conceivable future computers employing conventional approaches. In both examples the linguistic geometry tools produced deep and highly selective searches in comparison with conventional search algorithms. For the first example, a sketch of the proof of the optimality of the solution is given.
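
    The worked arithmetic: a uniform tree with branching factor 30 and depth 25 contains 30^25 leaf positions.

    ```python
    # 30**25 = 847288609443 * 10**25, i.e. roughly 8.5e36 positions --
    # far beyond exhaustive search on any existing or foreseeable machine.
    print(f"{30**25:.2e}")   # 8.47e+36
    ```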

  17. Level of Satisfaction of Older Persons with Their General Practitioner and Practice: Role of Complexity of Health Problems

    PubMed Central

    Poot, Antonius J.; den Elzen, Wendy P. J.; Blom, Jeanet W.; Gussekloo, Jacobijn

    2014-01-01

    Background Satisfaction is widely used to evaluate and direct delivery of medical care; a complicated relationship exists between patient satisfaction, morbidity and age. This study investigates the relationships between complexity of health problems and level of patient satisfaction of older persons with their general practitioner (GP) and practice. Methods and Findings This study is embedded in the ISCOPE (Integrated Systematic Care for Older Persons) study. Enlisted patients aged ≥75 years from 59 practices received a written questionnaire to screen for complex health problems (somatic, functional, psychological and social). For 2664 randomly chosen respondents (median age 82 years; 68% female) information was collected on level of satisfaction (satisfied, neutral, dissatisfied) with their GP and general practice, and demographic and clinical characteristics including complexity of health problems. Of all participants, 4% was dissatisfied with their GP care, 59% neutral and 37% satisfied. Between these three categories no differences were observed in age, gender, country of birth or education level. The percentage of participants dissatisfied with their GP care increased from 0.4% in those with 0 problem domains to 8% in those with 4 domains, i.e. having complex health problems (p<0.001). Per additional health domain with problems, the risk of being dissatisfied increased 1.7 times (95% CI 1.4–2.14; p<0.001). This was independent of age, gender, and demographic and clinical parameters (adjusted OR 1.4, 95% CI 1.1–1.8; p = 0.021). Conclusion In older persons, dissatisfaction with general practice is strongly correlated with rising complexity of health problems, independent of age, demographic and clinical parameters. It remains unclear whether complexity of health problems is a patient characteristic influencing the perception of care, or whether the care is unable to handle the demands of these patients. Prospective studies are needed to investigate the causal associations between care organization, patient characteristics, indicators of quality, and patient perceptions. PMID:24710557

  18. Level of satisfaction of older persons with their general practitioner and practice: role of complexity of health problems.

    PubMed

    Poot, Antonius J; den Elzen, Wendy P J; Blom, Jeanet W; Gussekloo, Jacobijn

    2014-01-01

    Satisfaction is widely used to evaluate and direct delivery of medical care; a complicated relationship exists between patient satisfaction, morbidity and age. This study investigates the relationships between complexity of health problems and level of patient satisfaction of older persons with their general practitioner (GP) and practice. This study is embedded in the ISCOPE (Integrated Systematic Care for Older Persons) study. Enlisted patients aged ≥75 years from 59 practices received a written questionnaire to screen for complex health problems (somatic, functional, psychological and social). For 2664 randomly chosen respondents (median age 82 years; 68% female) information was collected on level of satisfaction (satisfied, neutral, dissatisfied) with their GP and general practice, and demographic and clinical characteristics including complexity of health problems. Of all participants, 4% was dissatisfied with their GP care, 59% neutral and 37% satisfied. Between these three categories no differences were observed in age, gender, country of birth or education level. The percentage of participants dissatisfied with their GP care increased from 0.4% in those with 0 problem domains to 8% in those with 4 domains, i.e. having complex health problems (p<0.001). Per additional health domain with problems, the risk of being dissatisfied increased 1.7 times (95% CI 1.4-2.14; p<0.001). This was independent of age, gender, and demographic and clinical parameters (adjusted OR 1.4, 95% CI 1.1-1.8; p = 0.021). In older persons, dissatisfaction with general practice is strongly correlated with rising complexity of health problems, independent of age, demographic and clinical parameters. It remains unclear whether complexity of health problems is a patient characteristic influencing the perception of care, or whether the care is unable to handle the demands of these patients. Prospective studies are needed to investigate the causal associations between care organization, patient characteristics, indicators of quality, and patient perceptions.

  19. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays.

    PubMed

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-26

    The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array and therefore severely increases power consumption, cost, and system complexity. To resolve the inaccuracy and high-complexity problems found in these previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of its previous counterpart.

  20. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis (GSA) method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with a very large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
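
    The variogram idea at the heart of VARS can be illustrated in a few lines. The sketch below is a minimal illustration, not the full VARS algorithm; the toy test function, step size h, and sampling scheme are invented for the example. It estimates a directional variogram gamma_d(h) = 0.5 E[(f(x + h e_d) - f(x))^2] for each input factor and uses it to separate influential from inert factors:

    ```python
    import numpy as np

    def directional_variogram(f, dim, n_dims, h, n_pairs=2000, seed=0):
        """Estimate gamma_dim(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2]
        by sampling random base points in the unit hypercube."""
        rng = np.random.default_rng(seed)
        x = rng.random((n_pairs, n_dims)) * (1.0 - h)  # keep x + h inside [0, 1]
        x_shifted = x.copy()
        x_shifted[:, dim] += h
        return 0.5 * np.mean((f(x_shifted) - f(x)) ** 2)

    # Toy test function: factor 0 is highly influential, factor 2 is nearly inert.
    f = lambda x: np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]

    for d in range(3):
        print(f"factor {d}: gamma(0.1) = {directional_variogram(f, d, 3, 0.1):.4f}")
    ```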

  1. A novel heuristic algorithm for capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

    The vehicle routing problem with capacity constraints was considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity for large-scale problems. Consequently, new heuristic and metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS), with several specifically designed operators and features, to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm performs well on large-scale instances and offers an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and obtained encouraging results in comparison with the company's current routing plan.
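
    The paper's algorithm combines tabu search with ALNS; as a rough orientation only, the sketch below shows the generic ALNS destroy/repair skeleton on an invented toy instance (random removal plus greedy cheapest feasible reinsertion, with no tabu list or adaptive operator weights):

    ```python
    import math, random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def total_cost(routes, pts, depot):
        cost = 0.0
        for r in routes:
            tour = [depot] + [pts[c] for c in r] + [depot]
            cost += sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        return cost

    def destroy(routes, k, rng):
        """Remove k random customers from the current solution."""
        removed = rng.sample([c for r in routes for c in r], k)
        return [[c for c in r if c not in removed] for r in routes], removed

    def repair(routes, removed, pts, demand, cap, depot):
        """Reinsert each removed customer at the cheapest feasible position."""
        for c in removed:
            best = None  # (cost increase, route index, position)
            for ri, r in enumerate(routes):
                if sum(demand[x] for x in r) + demand[c] > cap:
                    continue
                for pos in range(len(r) + 1):
                    cand = r[:pos] + [c] + r[pos:]
                    delta = total_cost([cand], pts, depot) - total_cost([r], pts, depot)
                    if best is None or delta < best[0]:
                        best = (delta, ri, pos)
            _, ri, pos = best  # this toy instance always has a feasible insertion
            routes[ri].insert(pos, c)
        return routes

    rng = random.Random(1)
    pts = {i: (10 * rng.random(), 10 * rng.random()) for i in range(1, 9)}
    demand = {i: 1 for i in pts}
    depot, cap = (5.0, 5.0), 4
    best = [[1, 2, 3, 4], [5, 6, 7, 8]]           # initial feasible solution
    best_cost = total_cost(best, pts, depot)

    for _ in range(200):                          # plain destroy/repair loop
        cand, removed = destroy([r[:] for r in best], k=2, rng=rng)
        cand = repair(cand, removed, pts, demand, cap, depot)
        c = total_cost(cand, pts, depot)
        if c < best_cost:
            best, best_cost = cand, c
    print(best, round(best_cost, 2))
    ```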

  2. Non-Residential Father-Child Involvement, Interparental Conflict and Mental Health of Children Following Divorce: A Person-Focused Approach.

    PubMed

    Elam, Kit K; Sandler, Irwin; Wolchik, Sharlene; Tein, Jenn-Yun

    2016-03-01

    Variable-centered research has found complex relationships between child well-being and two critical aspects of the post-divorce family environment: the level of non-residential father involvement (i.e., contact and supportive relationship) with their children and the level of conflict between the father and mother. However, these analyses fail to capture individual differences based on distinct patterns of interparental conflict, father support and father contact. Using a person-centered latent profile analysis, the present study examined (1) profiles of non-residential father contact, support, and interparental conflict in the 2 years following divorce (N = 240), when children (49 % female) were between 9 and 12 years of age and (2) differences across profiles in concurrent child adjustment outcomes as well as outcomes 6 years later. Four profiles of father involvement were identified: High Contact-Moderate Conflict-Moderate Support, Low Contact-Moderate Conflict-Low Support, High Conflict-Moderate Contact-Moderate Support, and Low Conflict-Moderate Contact-Moderate Support. Concurrently, children with fathers in the group with high conflict were found to have significantly greater internalizing and externalizing problems compared to all other groups. Six years later, children with fathers in the group with low contact and low support were found to have greater internalizing and externalizing problems compared to children with fathers in the high conflict group, and also greater internalizing problems compared to children with fathers in the low conflict group. These results provide insight into the complex relationship among non-residential fathers' conflict, contact, and support in child adjustment within divorcing families.

  3. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on an example: the parametric synthesis of a static linear regulator for complex dynamic systems. The benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode at various levels of detail.

  4. Developmental association of prosocial behaviour with aggression, anxiety and depression from infancy to preadolescence.

    PubMed

    Nantel-Vivier, Amélie; Pihl, Robert O; Côté, Sylvana; Tremblay, Richard E

    2014-10-01

    Research on associations between children's prosocial behaviour and mental health has provided mixed evidence. The present study sought to describe and predict the joint development of prosocial behaviour with externalizing and internalizing problems (physical aggression, anxiety and depression) from 2 to 11 years of age. Data were drawn from the National Longitudinal Survey of Children and Youth (NLSCY). Biennial maternal ratings of prosocial behaviour, physical aggression, anxiety and depression were sought for 10,700 children aged 0 to 9 years at the first assessment point. While a negative association was observed between prosociality and physical aggression, more complex associations emerged with internalizing problems. Being a boy decreased the likelihood of membership in the high prosocial trajectory. Maternal depression increased the likelihood of moderate aggression, but also of joint high prosociality/low aggression. Low family income predicted the joint development of high prosociality with high physical aggression and high depression. Individual differences exist in the association of prosocial behaviour with mental health. While high prosociality tends to co-occur with low levels of mental health problems, high prosociality and internalizing/externalizing problems can co-occur in subgroups of children. Child, mother and family characteristics are predictive of individual differences in prosocial behaviour and mental health development. Mechanisms underlying these associations warrant future investigations.

  5. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of the present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. This report studies the HARP program and investigates how well users can model systems with it. One of the important objectives is to study how user-friendly the program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Since no answer can be more accurate than the fidelity of the model, an appendix is included which discusses modeling accuracy. A broad viewpoint is taken, and all problems which occurred in the use of HARP are discussed. Such problems include computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, and accuracy problems.

  6. Understanding the determinants of problem-solving behavior in a complex environment

    NASA Technical Reports Server (NTRS)

    Casner, Stephen A.

    1994-01-01

    It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how the complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates of the probable effects of each environmental feature.

  7. On Complex Water Conflicts: Role of Enabling Conditions for Pragmatic Resolution

    NASA Astrophysics Data System (ADS)

    Islam, S.; Choudhury, E.

    2016-12-01

    Many of our current and emerging water problems are interconnected and cross boundaries, domains, scales, and sectors. These boundary-crossing water problems are neither static nor linear; they are often interconnected nonlinearly with other problems through feedbacks. The solution space for these complex problems - involving interdependent variables, processes, actors, and institutions - cannot be pre-stated. We need to recognize the disconnect among values, interests, and tools, as well as among problems, policies, and politics. Scientific and technological solutions are desired for efficiency and reliability, but they need to be politically feasible and actionable. Governing and managing complex water problems require difficult tradeoffs in exploring and sharing benefits and burdens through carefully crafted negotiation processes. The crafting of such a negotiation process, we argue, constitutes a pragmatic approach to negotiation - one that is based on the identification of enabling conditions, as opposed to mechanistic causal explanations, and rooted in contextual conditions to specify and ensure the principles of equity and sustainability. We use two case studies to demonstrate the efficacy of the proposed principled pragmatic approach to addressing complex water problems.

  8. Improvement of DGGE analysis by modifications of PCR protocols for analysis of microbial community members with low abundance.

    PubMed

    Wang, Yong-Feng; Zhang, Fang-Qiu; Gu, Ji-Dong

    2014-06-01

    Denaturing gradient gel electrophoresis (DGGE) is a powerful technique to reveal the community structures and composition of microorganisms in complex natural environments and samples. However, positive and reproducible polymerase chain reaction (PCR) products, which are difficult to acquire for some specific samples due to low abundance of the target microorganisms, significantly impair the effective application of DGGE. Thus, nested PCR is often introduced to generate positive PCR products from the complex samples, but one problem is also introduced: the total number of thermocycles in nested PCR is usually unacceptably high, which results in skewed community structures through the generation of random or mismatched PCR products on the DGGE gel, as demonstrated in this study. Furthermore, nested PCR cannot resolve the uneven representation of PCR products in complex samples with unequal richness of microbial populations. In order to solve these two problems with nested PCR, the general protocol was modified and improved in this study. Firstly, a general PCR procedure was used to amplify the target genes with PCR primers without any guanine-cytosine (GC) clamp; the resultant PCR products were then purified and diluted to 0.01 μg/ml. Subsequently, the diluted PCR products were used as templates for re-amplification with the same PCR primers carrying the GC clamp for 17 cycles, and the products were finally subjected to DGGE analysis. We demonstrated that this is a much more reliable approach to obtaining a high-quality DGGE profile with high reproducibility. We therefore recommend this improved protocol for analyzing microorganisms of low abundance in complex samples with the DGGE fingerprinting technique, to avoid biased results.

  9. Amoeba-inspired nanoarchitectonic computing implemented using electrical Brownian ratchets.

    PubMed

    Aono, M; Kasai, S; Kim, S-J; Wakabayashi, M; Miwa, H; Naruse, M

    2015-06-12

    In this study, we extracted the essential spatiotemporal dynamics that allow an amoeboid organism to solve a computationally demanding problem and adapt to its environment, thereby proposing a nature-inspired nanoarchitectonic computing system, which we implemented using a network of nanowire devices called 'electrical Brownian ratchets (EBRs)'. By utilizing the fluctuations generated from thermal energy in nanowire devices, we used our system to solve the satisfiability problem, a highly complex combinatorial problem related to a wide variety of practical applications. We evaluated the dependency of the solution search speed on its exploration parameter, which characterizes the fluctuation intensity of EBRs, using a simulation model of our system called 'AmoebaSAT-Brownian'. We found that AmoebaSAT-Brownian enhanced the solution search speed dramatically when we imposed certain constraints on the fluctuations in its time series, and that it outperformed a well-known stochastic local search method. These results suggest a new computing paradigm, which may allow high-speed problem solving to be implemented by interacting nanoscale devices with low power consumption.
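
    The abstract does not spell out the baseline, but a standard stochastic local search for SAT of the kind referred to is WalkSAT. A minimal WalkSAT-style sketch, with the noise parameter playing a role loosely analogous to the fluctuation intensity, is:

    ```python
    import random

    def walksat(clauses, n_vars, p_noise=0.3, max_flips=10000, seed=0):
        """WalkSAT-style stochastic local search. A clause is a tuple of
        nonzero ints; literal v means variable |v| must be True if v > 0.
        Returns a satisfying assignment (dict) or None."""
        rng = random.Random(seed)
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        sat = lambda cl: any(assign[abs(l)] == (l > 0) for l in cl)
        for _ in range(max_flips):
            unsat = [cl for cl in clauses if not sat(cl)]
            if not unsat:
                return assign
            cl = rng.choice(unsat)
            if rng.random() < p_noise:            # noisy "fluctuation" move
                v = abs(rng.choice(cl))
            else:                                 # greedy move: flip the variable
                def n_broken(v):                  # that leaves fewest unsat clauses
                    assign[v] = not assign[v]
                    n = sum(not sat(c) for c in clauses)
                    assign[v] = not assign[v]
                    return n
                v = min((abs(l) for l in cl), key=n_broken)
            assign[v] = not assign[v]
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(walksat([(1, 2), (-1, 3), (-2, -3)], n_vars=3))
    ```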

  10. Mathematical model to select the optimal alternative for an integral plan to desertification and erosion control for the Chaco Area in Salta Province (Argentine)

    NASA Astrophysics Data System (ADS)

    Grau, J. B.; Anton, J. M.; Tarquis, A. M.; Colombo, F.; de Los Rios, L.; Cisneros, J. M.

    2010-04-01

    Multi-criteria Decision Analysis (MCDA) is concerned with identifying the values, uncertainties and other issues relevant to a given decision, its rationality, and the resulting optimal decision. These decisions are difficult because of the complexity of the system and the difficulty of determining the optimal situation or behavior. This work illustrates how MCDA is applied in practice to a complex problem such as soil erosion and degradation. Desertification is a global problem that has recently been discussed in several forums, including the UN, which states: "Desertification has a very high incidence in the environmental and food security, socioeconomic stability and world sustained development". Desertification is the loss of soil quality and one of FAO's most important concerns, as hunger in the world is increasing. Multiple factors of diverse nature are involved, related to natural phenomena (water and wind erosion), human activities linked to soil and water management, and others not related to the former. This problem exists throughout the world, but its effects and solutions differ. It is necessary to take into account economic, environmental, cultural and sociological criteria. A multi-criteria model to select among different alternatives for an integral plan to ameliorate or solve this problem in each area has been elaborated, taking into account eight criteria and six alternatives. Six sub-zones were established following previous studies, and for each one the initial matrix and weights were defined for the different criteria. Three multi-criteria decision methods were applied to the different sub-zones: ELECTRE, PROMETHEE and AHP. The results show a high level of consistency among the three multi-criteria methods despite the complexity of the system studied. The methods are described for the La Estrella sub-zone, indicating the selection of weights, the initial matrices, the MATHCAD8 algorithms used for PROMETHEE, and the Expert Choice graph showing the results of AHP. A brief schema of the actions recommended for each of the six sub-zones is reported in the Conclusions, with "We can combine Autochthonous and High Value Forest" for La Estrella.
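
    For orientation, the net-flow computation behind PROMETHEE II (one of the three methods used) can be sketched as follows. The decision matrix and weights below are invented placeholders, not the study's data, and the 'usual' preference function is the simplest of the standard choices:

    ```python
    import numpy as np

    def promethee_ii(scores, weights):
        """Net outranking flows with the 'usual' preference function.
        scores: alternatives x criteria (higher is better); weights sum to 1."""
        n = scores.shape[0]
        phi = np.zeros(n)
        for a in range(n):
            for b in range(n):
                if a != b:
                    pi_ab = weights @ (scores[a] > scores[b])  # preference of a over b
                    pi_ba = weights @ (scores[b] > scores[a])
                    phi[a] += (pi_ab - pi_ba) / (n - 1)
        return phi

    scores = np.array([[0.7, 0.4, 0.9],      # hypothetical alternatives x criteria
                       [0.6, 0.8, 0.5],
                       [0.9, 0.3, 0.4]])
    weights = np.array([0.5, 0.3, 0.2])      # hypothetical criterion weights
    print(promethee_ii(scores, weights))     # rank alternatives by net flow
    ```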

  11. New Developments of Computational Fluid Dynamics and Their Applications to Practical Engineering Problems

    NASA Astrophysics Data System (ADS)

    Chen, Hudong

    2001-06-01

    There have been considerable advances in Lattice Boltzmann (LB) based methods in the last decade. By now, the fundamental concept of using the approach as an alternative tool for computational fluid dynamics (CFD) has been substantially appreciated and validated in mainstream scientific research and in industrial engineering communities. Lattice Boltzmann based methods possess several major advantages: a) less numerical dissipation due to the linear Lagrange-type advection operator in the Boltzmann equation; b) local dynamic interactions suitable for highly parallel processing; c) physical handling of boundary conditions for complicated geometries and accurate control of fluxes; d) microscopically consistent modeling of thermodynamics and of interface properties in complex multiphase flows. It provides a great opportunity to apply the method to practical engineering problems encountered in a wide range of industries from automotive, aerospace to chemical, biomedical, petroleum, nuclear, and others. One of the key challenges is to extend the applicability of this alternative approach to regimes of highly turbulent flows commonly encountered in practical engineering situations involving high Reynolds numbers. Over the past ten years, significant efforts have been made on this front at Exa Corporation in developing a lattice Boltzmann based commercial CFD software, PowerFLOW. It has become a useful computational tool for the simulation of turbulent aerodynamics in practical engineering problems involving extremely complex geometries and flow situations, such as in new automotive vehicle designs worldwide. In this talk, we present an overall LB-based algorithm concept along with certain key extensions in order to accurately handle turbulent flows involving extremely complex geometries. To demonstrate the accuracy of turbulent flow simulations, we provide a set of validation results for some well-known academic benchmarks. These include straight channels, backward-facing steps, flows over a curved hill and typical NACA airfoils at various angles of attack including prediction of stall angle. We further provide numerous engineering cases, ranging from external aerodynamics around various car bodies to internal flows involved in various industrial devices. We conclude with a discussion of certain future extensions for complex fluids.
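
    PowerFLOW's algorithms are proprietary, but the basic lattice-BGK scheme the abstract builds on fits in a short script. A textbook-style D2Q9 sketch follows (periodic decay of a shear wave; grid size, relaxation time, and initial condition are arbitrary demo choices, and this is not Exa's production algorithm):

    ```python
    import numpy as np

    # D2Q9 lattice-BGK: periodic decay of a shear wave.
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)              # lattice weights
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                  [1,1],[-1,1],[-1,-1],[1,-1]])           # lattice velocities
    nx = ny = 64
    tau = 0.8                          # relaxation time; viscosity = (tau-0.5)/3

    def feq(rho, ux, uy):
        """Standard D2Q9 equilibrium distribution."""
        f = np.empty((9, nx, ny))
        for i in range(9):
            cu = c[i, 0]*ux + c[i, 1]*uy
            f[i] = w[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
        return f

    rho = np.ones((nx, ny))
    ux = np.zeros((nx, ny))
    uy = 0.01*np.sin(2*np.pi*np.arange(nx)/nx)[:, None]*np.ones((1, ny))
    f = feq(rho, ux, uy)

    for step in range(500):
        f -= (f - feq(rho, ux, uy))/tau                   # BGK collision
        for i in range(9):                                # periodic streaming
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        rho = f.sum(axis=0)
        ux = (c[:, 0, None, None]*f).sum(axis=0)/rho
        uy = (c[:, 1, None, None]*f).sum(axis=0)/rho

    print("peak shear velocity after 500 steps:", np.abs(uy).max())  # viscous decay
    ```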

  12. The 3D elliptic restricted three-body problem: periodic orbits which bifurcate from limiting restricted problems. Complex instability

    NASA Astrophysics Data System (ADS)

    Ollé, Mercè; Pacha, Joan R.

    1999-11-01

    In the present work we use certain isolated symmetric periodic orbits found in some limiting Restricted Three-Body Problems to obtain, by numerical continuation, families of symmetric periodic orbits of the more general Spatial Elliptic Restricted Three-Body Problem. In particular, the Planar Isosceles Restricted Three-Body Problem, the Sitnikov Problem and the MacMillan problem are considered. A stability study of the periodic orbits of the families obtained - especially focused on detecting transitions to complex instability - is also carried out.

  13. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.

  14. Complex systems in metabolic engineering.

    PubMed

    Winkler, James D; Erickson, Keesha; Choudhury, Alaksh; Halweg-Edwards, Andrea L; Gill, Ryan T

    2015-12-01

    Metabolic engineers manipulate intricate biological networks to build efficient biological machines. The inherent complexity of this task, derived from the extensive and often unknown interconnectivity between and within these networks, often prevents researchers from achieving desired performance. Other fields have developed methods to tackle the issue of complexity for their unique subset of engineering problems, but to date, there has not been an extensive and comprehensive examination of how metabolic engineers use existing tools to ameliorate this effect on their own research projects. In this review, we examine how complexity affects engineering at the protein, pathway, and genome levels within an organism, and the tools for handling these issues to achieve high-performing strain designs. Quantitative complexity metrics and their applications to metabolic engineering versus traditional engineering fields are also discussed. We conclude by predicting how metabolic engineering practices may advance in light of an explicit consideration of design complexity.

  15. The Implementation of Problem-Solving Based Laboratory Activities to Teach the Concept of Simple Harmonic Motion in Senior High School

    NASA Astrophysics Data System (ADS)

    Iradat, R. D.; Alatas, F.

    2017-09-01

    Simple harmonic motion is considered a relatively complex concept for students to understand. This study implements laboratory activities that focus on solving contextual problems related to the concept. A group of senior high school students participated in this pre-experimental study with a one-group pretest-posttest design. The laboratory activities had a positive impact on improving students' scientific skills, such as formulating goals, conducting experiments, applying laboratory tools, and collecting data. This study therefore adds to the theoretical and practical knowledge that needs to be considered to teach complicated concepts in physics more effectively.

  16. Plants and men in space - A new field in plant physiology

    NASA Technical Reports Server (NTRS)

    Andre, M.; Macelroy, R. D.

    1990-01-01

    Results are presented on a comparison of the nutritional values of, and human psychological responses to, algae and higher plants considered for growth as food on long-term missions in space, together with the technological complexities of growing these plants. The comparison shows the advantages of higher plants, with results suggesting that a high level of material recycling can be obtained. It is noted that the issue of space gravity may not be a major problem for plants because of the possibility that phototropism can provide an alternative sense of direction. Problems of waste recycling can be solved in association with plant cultivation, and a high degree of autonomy of food production can be obtained.

  17. Split-shot sinker facilitates seton treatment of anal fistulae.

    PubMed

    Awad, M L; Sell, H W; Stahlfeld, K R

    2009-06-01

    The cutting seton is an inexpensive and effective method of treating high, complex perianal fistulae. Following placement of the seton, advancement through the external sphincter muscles requires progressive tightening of the seton. Maintaining the appropriate tension and the onset of perianal pressure necrosis are problems frequently encountered with this technique. Using a 3-0 polypropylene suture, a red-rubber catheter, and a nontoxic tin split-shot sinker, we minimized or eliminated these problems. We initially used this technique in one patient with satisfactory results. The technique is technically easy, safe, inexpensive, and efficient, and we now use it in all patients with high perianal fistulae who require a seton.

  18. Complex fuzzy soft expert sets

    NASA Astrophysics Data System (ADS)

    Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak

    2017-04-01

    Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, two major problems are inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they have no mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.

  19. Thresholds of Knowledge Development in Complex Problem Solving: A Multiple-Case Study of Advanced Learners' Cognitive Processes

    ERIC Educational Resources Information Center

    Bogard, Treavor; Liu, Min; Chiang, Yueh-hui Vanessa

    2013-01-01

    This multiple-case study examined how advanced learners solved a complex problem, focusing on how their frequency and application of cognitive processes contributed to differences in performance outcomes, and developing a mental model of a problem. Fifteen graduate students with backgrounds related to the problem context participated in the study.…

  20. A linguistic geometry for space applications

    NASA Technical Reports Server (NTRS)

    Stilman, Boris

    1994-01-01

    We develop a formal theory, the so-called Linguistic Geometry, in order to discover the inner properties of human expert heuristics that were successful in a certain class of complex control systems and to apply them to different systems. This research relies on the formalization of the search heuristics of highly skilled human experts, which allow for the decomposition of a complex system into a hierarchy of subsystems and thus solve intractable problems by reducing the search. The hierarchy of subsystems is represented as a hierarchy of formal attribute languages. This paper includes a formal survey of Linguistic Geometry and a new example of the solution of an optimization problem for space robotic vehicles. This example includes the actual generation of the hierarchy of languages and some details of trajectory generation, and it demonstrates the drastic reduction of search in comparison with conventional search algorithms.

  1. Identifying an influential spreader from a single seed in complex networks via a message-passing approach

    NASA Astrophysics Data System (ADS)

    Min, Byungjoon

    2018-01-01

    Identifying the most influential spreaders is one of the outstanding problems in the physics of complex systems. Many approaches have attempted to rank the influence of nodes, but they still lack the accuracy to single out influential spreaders. Here, we directly tackle the problem of finding important spreaders by analytically solving for the expected size of epidemic outbreaks when spreading originates from a single seed. We derive and validate a theory for calculating the size of epidemic outbreaks with a single seed using a message-passing approach. In addition, we find that the probability that an epidemic outbreak occurs depends strongly on the location of the seed, but the size of the outbreak once it occurs is insensitive to the seed. We also show that our approach can be successfully adapted to weighted networks.
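
    A message-passing fixed point of this general kind (exact on trees, approximate on loopy graphs) can be written compactly. The sketch below estimates the expected final outbreak size under bond percolation with transmission probability T and then picks the best single seed; the graph and parameters are placeholders, not the paper's setup:

    ```python
    import networkx as nx

    def outbreak_size(G, seed, T, iters=100):
        """Expected final outbreak size under bond percolation with
        transmission probability T, via message passing (exact on trees,
        approximate on loopy graphs). m[(i, j)] is the probability that
        node i gets infected through edges other than (i, j)."""
        m = {}
        for i, j in G.edges():
            m[(i, j)] = m[(j, i)] = 0.0
        for _ in range(iters):
            for (i, j) in m:
                if i == seed:
                    m[(i, j)] = 1.0
                else:
                    prod = 1.0
                    for k in G[i]:
                        if k != j:
                            prod *= 1.0 - T * m[(k, i)]
                    m[(i, j)] = 1.0 - prod
        size = 0.0
        for i in G.nodes():
            prod = 1.0
            for k in G[i]:
                prod *= 1.0 - T * m[(k, i)]
            size += 1.0 if i == seed else 1.0 - prod
        return size

    G = nx.karate_club_graph()
    best = max(G.nodes(), key=lambda s: outbreak_size(G, s, T=0.2))
    print("best single seed:", best)
    ```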

  2. An evolving systems-based methodology for healthcare planning.

    PubMed

    Warwick, Jon; Bell, Gary

    2007-01-01

    Healthcare planning seems beset with problems at all hierarchical levels. These are caused by the 'soft' nature of many of the issues present in healthcare planning and the high levels of complexity inherent in healthcare services. There has, in recent years, been a move to utilize systems thinking ideas in an effort to gain a better understanding of the forces at work within the healthcare environment and these have had some success. This paper argues that systems-based methodologies can be further enhanced by metrication and modeling which assist in exploring the changed emergent behavior of a system resulting from management intervention. The paper describes the Holon Framework as an evolving systems-based approach that has been used to help clients understand complex systems (in the education domain) that would have application in the analysis of healthcare problems.

  3. Generative model selection using a scalable and size-independent complex network classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motallebi, Sadegh, E-mail: motallebi@ce.sharif.edu; Aliakbary, Sadegh, E-mail: aliakbary@ce.sharif.edu; Habibi, Jafar, E-mail: jhabibi@sharif.edu

    2013-12-15

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks," outperforms existing methods with respect to accuracy, scalability, and size-independence.
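
    The pipeline - generate labeled synthetic graphs, extract size-independent features, train a decision tree - can be sketched as follows. Three generators instead of seven and a deliberately small feature set; both are simplifications, not the paper's exact method:

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.tree import DecisionTreeClassifier

    def features(G):
        """Small, size-independent structural fingerprint of a graph."""
        deg = np.array([d for _, d in G.degree()], dtype=float)
        return [nx.average_clustering(G),
                deg.std() / deg.mean(),                # degree heterogeneity
                nx.degree_assortativity_coefficient(G)]

    rng = np.random.default_rng(0)
    X, y = [], []
    for _ in range(40):                                # labeled training graphs
        n = int(rng.integers(200, 400))
        for label, G in enumerate([nx.erdos_renyi_graph(n, 0.02),
                                   nx.barabasi_albert_graph(n, 3),
                                   nx.watts_strogatz_graph(n, 6, 0.1)]):
            X.append(features(G))
            y.append(label)

    clf = DecisionTreeClassifier(max_depth=4).fit(X[:-3], y[:-3])
    print("predicted generators of held-out graphs:", clf.predict(X[-3:]))
    # expected: [0 1 2], i.e. ER, BA, WS recovered from structure alone
    ```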

  4. Solving Partial Differential Equations on Overlapping Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henshaw, W D

    2008-09-22

    We discuss the solution of partial differential equations (PDEs) on overlapping grids. This is a powerful technique for efficiently solving problems in complex, possibly moving, geometry. An overlapping grid consists of a set of structured grids that overlap and cover the computational domain. By allowing the grids to overlap, grids for complex geometries can be more easily constructed. The overlapping grid approach can also be used to remove coordinate singularities by, for example, covering a sphere with two or more patches. We describe the application of the overlapping grid approach to a variety of different problems. These include the solution of incompressible fluid flows with moving and deforming geometry, the solution of high-speed compressible reactive flow with rigid bodies using adaptive mesh refinement (AMR), and the solution of the time-domain Maxwell's equations of electromagnetism.
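
    The essential mechanics - each component grid advances its own interior, and the end point facing the other grid is filled by interpolation from the overlapping grid - can be shown in one dimension. A minimal sketch for the heat equation on two overlapping grids (grid spacing, overlap region, and time step are arbitrary demo choices):

    ```python
    import numpy as np

    # u_t = u_xx on [0, 1] with u(0) = u(1) = 0, solved on two overlapping
    # 1D grids. Each grid advances its interior explicitly (FTCS); the end
    # point facing the other grid is filled by interpolation each step.
    nA = nB = 31
    xA = np.linspace(0.0, 0.6, nA)            # left grid
    xB = np.linspace(0.4, 1.0, nB)            # right grid, overlaps [0.4, 0.6]
    uA, uB = np.sin(np.pi * xA), np.sin(np.pi * xB)
    dx = xA[1] - xA[0]
    dt = 0.2 * dx**2                          # stable: dt/dx^2 < 0.5

    for _ in range(2000):
        uA[1:-1] += dt / dx**2 * (uA[2:] - 2*uA[1:-1] + uA[:-2])
        uB[1:-1] += dt / dx**2 * (uB[2:] - 2*uB[1:-1] + uB[:-2])
        uA[-1] = np.interp(xA[-1], xB, uB)    # interface value from grid B
        uB[0] = np.interp(xB[0], xA, uA)      # interface value from grid A
        uA[0] = uB[-1] = 0.0                  # physical boundary conditions

    t = 2000 * dt                             # compare with exact decay rate
    print(np.interp(0.5, xA, uA), np.exp(-np.pi**2 * t) * np.sin(np.pi * 0.5))
    ```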

  5. A VERSATILE SHARP INTERFACE IMMERSED BOUNDARY METHOD FOR INCOMPRESSIBLE FLOWS WITH COMPLEX BOUNDARIES

    PubMed Central

    Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.

    2010-01-01

    A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary, and it is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements, while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flows past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally, two cases inspired by biology with highly complex three-dimensional bodies are simulated in order to demonstrate the versatility of the method. PMID:20216919

  6. The Complex Route to Success: Complex Problem-Solving Skills in the Prediction of University Success

    ERIC Educational Resources Information Center

    Stadler, Matthias J.; Becker, Nicolas; Greiff, Samuel; Spinath, Frank M.

    2016-01-01

    Successful completion of a university degree is a complex matter. Based on considerations regarding the demands of acquiring a university degree, the aim of this paper was to investigate the utility of complex problem-solving (CPS) skills in the prediction of objective and subjective university success (SUS). The key finding of this study was that…

  7. Working with complexity: experiences of caring for mothers seeking residential parenting services in New South Wales, Australia.

    PubMed

    Fowler, Cathrine; Schmied, Virginia; Dickinson, Marie; Dahlen, Hannah Grace

    2017-02-01

    To investigate staff perception of the changing complexity of mothers and infants admitted to two residential parenting services in New South Wales in the decade from 2005-2015. For many mothers with a young child, parenting is difficult and stressful. If parenting occurs within the context of anxiety, mental illness or abuse it often becomes a high-risk situation for the primary caregiver. Residential parenting services provide early nursing intervention before parenting problems escalate and require physical or mental health focused care. A qualitative descriptive design using semi-structured interview questions was used as phase three of a larger study. Data were gathered from 35 child and family health nurses and ten physicians during eight focus groups. Three main themes emerged: (1) dealing with complexity; (2) changing practice; and (3) appropriate knowledge and skills to handle greater complexity. There was a mix of participant opinions about the increasing complexity of the mothers presenting at residential parenting services during the past decade. Some of the nurses and physicians confirmed an increase in complexity of the mothers, while several participants proposed that it was linked to their increased psychosocial assessment knowledge and skill. All participants recognised their work had grown in complexity regardless of their perception about the increased complexity of the mothers. Australian residential parenting services have a significant role in supporting mothers and their families who are experiencing parenting difficulties. They frequently provide early intervention that helps minimise later emotional and physical problems. Nurses are well placed to work with and support mothers with complex histories. Acknowledgement is required that this work is stressful, and nurses need to be adequately supported and educated to manage the complex presentations of many families.

  8. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  9. Exploiting Lipid Permutation Symmetry to Compute Membrane Remodeling Free Energies.

    PubMed

    Bubnis, Greg; Risselada, Herre Jelger; Grubmüller, Helmut

    2016-10-28

    A complete physical description of membrane remodeling processes, such as fusion or fission, requires knowledge of the underlying free energy landscapes, particularly in barrier regions involving collective shape changes, topological transitions, and high curvature, where Canham-Helfrich (CH) continuum descriptions may fail. To calculate these free energies using atomistic simulations, one must address not only the sampling problem due to high free energy barriers, but also an orthogonal sampling problem of combinatorial complexity stemming from the permutation symmetry of identical lipids. Here, we solve the combinatorial problem with a permutation reduction scheme to map a structural ensemble into a compact, nondegenerate subregion of configuration space, thereby permitting straightforward free energy calculations via umbrella sampling. We applied this approach, using a coarse-grained lipid model, to test the CH description of bending and found sharp increases in the bending modulus for curvature radii below 10 nm. These deviations suggest that an anharmonic bending term may be required for CH models to give quantitative energetics of highly curved states.
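
    The combinatorial degeneracy being removed is easy to demonstrate: any relabeling of identical particles describes the same physical state, so mapping every configuration to a canonical representative collapses the N! equivalent copies. A toy sketch of such a reduction (plain lexicographic sorting; the paper's scheme is more elaborate than this):

    ```python
    import numpy as np

    def permutation_reduce(coords):
        """Canonical representative of a configuration of identical
        particles: sort rows lexicographically (by x, then y, then z), so
        all N! relabelings map to the same point in configuration space."""
        return coords[np.lexsort(coords.T[::-1])]

    rng = np.random.default_rng(0)
    lipids = rng.random((5, 3))                 # 5 identical "lipids" in 3D
    relabeled = lipids[rng.permutation(5)]      # same physical state, new labels
    print(np.allclose(permutation_reduce(lipids),
                      permutation_reduce(relabeled)))   # True: degeneracy removed
    ```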

  10. Restricted Complexity Framework for Nonlinear Adaptive Control in Complex Systems

    NASA Astrophysics Data System (ADS)

    Williams, Rube B.

    2004-02-01

    Control law adaptation that includes implicit or explicit adaptive state estimation can be a fundamental underpinning for the success of intelligent control in complex systems, particularly during subsystem failures, where vital system states and parameters can be impractical or impossible to measure directly. A practical algorithm is proposed for adaptive state filtering and control in nonlinear dynamic systems when the state equations are unknown or are too complex to model analytically. The state equations and the inverse plant model are approximated using neural networks. A framework for a neural-network-based nonlinear dynamic inversion control law is proposed, as an extrapolation of the previously developed restricted-complexity methodology used to formulate the adaptive state filter. Examples of adaptive filter performance are presented for an SSME simulation with a high-pressure turbine failure to support extrapolation to adaptive control problems.

  11. The Evolution of Chemical High-Throughput Experimentation To Address Challenging Problems in Pharmaceutical Synthesis.

    PubMed

    Krska, Shane W; DiRocco, Daniel A; Dreher, Spencer D; Shevlin, Michael

    2017-12-19

    The structural complexity of pharmaceuticals presents a significant challenge to modern catalysis. Many published methods that work well on simple substrates often fail when attempts are made to apply them to complex drug intermediates. The use of high-throughput experimentation (HTE) techniques offers a means to overcome this fundamental challenge by facilitating the rational exploration of large arrays of catalysts and reaction conditions in a time- and material-efficient manner. Initial forays into the use of HTE in our laboratories for solving chemistry problems centered around screening of chiral precious-metal catalysts for homogeneous asymmetric hydrogenation. The success of these early efforts in developing efficient catalytic steps for late-stage development programs motivated the desire to increase the scope of this approach to encompass other high-value catalytic chemistries. Doing so, however, required significant advances in reactor and workflow design and automation to enable the effective assembly and agitation of arrays of heterogeneous reaction mixtures and retention of volatile solvents under a wide range of temperatures. Associated innovations in high-throughput analytical chemistry techniques greatly increased the efficiency and reliability of these methods. These evolved HTE techniques have been utilized extensively to develop highly innovative catalysis solutions to the most challenging problems in large-scale pharmaceutical synthesis. Starting with Pd- and Cu-catalyzed cross-coupling chemistry, subsequent efforts expanded to other valuable modern synthetic transformations such as chiral phase-transfer catalysis, photoredox catalysis, and C-H functionalization. As our experience and confidence in HTE techniques matured, we envisioned their application beyond problems in process chemistry to address the needs of medicinal chemists. Here the problem of reaction generality is felt most acutely, and HTE approaches should prove broadly enabling. However, the quantities of both time and starting materials available for chemistry troubleshooting in this space generally are severely limited. Adapting to these needs led us to invest in smaller predefined arrays of transformation-specific screening "kits" and push the boundaries of miniaturization in chemistry screening, culminating in the development of "nanoscale" reaction screening carried out in 1536-well plates. Grappling with the problem of generality also inspired the exploration of cheminformatics-driven HTE approaches such as the Chemistry Informer Libraries. These next-generation HTE methods promise to empower chemists to run orders of magnitude more experiments and enable "big data" informatics approaches to reaction design and troubleshooting. With these advances, HTE is poised to revolutionize how chemists across both industry and academia discover new synthetic methods, develop them into tools of broad utility, and apply them to problems of practical significance.

  12. Multicategory Composite Least Squares Classifiers

    PubMed Central

    Park, Seo Young; Liu, Yufeng; Liu, Dacheng; Scholl, Paul

    2010-01-01

    Classification is a very useful statistical tool for information extraction. In particular, multicategory classification is commonly seen in various applications. Although binary classification problems are heavily studied, extensions to the multicategory case are much less so. In view of the increased complexity and volume of modern statistical problems, it is desirable to have multicategory classifiers that are able to handle problems with high dimensions and with a large number of classes. Moreover, it is necessary to have sound theoretical properties for the multicategory classifiers. In the literature, there exist several different versions of simultaneous multicategory Support Vector Machines (SVMs). However, the computation of the SVM can be difficult for large scale problems, especially for problems with large number of classes. Furthermore, the SVM cannot produce class probability estimation directly. In this article, we propose a novel efficient multicategory composite least squares classifier (CLS classifier), which utilizes a new composite squared loss function. The proposed CLS classifier has several important merits: efficient computation for problems with large number of classes, asymptotic consistency, ability to handle high dimensional data, and simple conditional class probability estimation. Our simulated and real examples demonstrate competitive performance of the proposed approach. PMID:21218128
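
    As a point of reference for the least-squares family of classifiers discussed here, a plain one-vs-all regularized least squares baseline (not the authors' composite loss) takes only a few lines; the synthetic data are invented for the demo:

    ```python
    import numpy as np

    def fit_ovr_least_squares(X, y, n_classes, lam=1e-3):
        """One-vs-all regularized least squares: regress +1/-1 class
        indicator targets on the features (with a bias column)."""
        Xb = np.hstack([X, np.ones((len(X), 1))])
        Y = -np.ones((len(X), n_classes))
        Y[np.arange(len(X)), y] = 1.0
        return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ Y)

    def predict(W, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return np.argmax(Xb @ W, axis=1)        # largest score wins

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in (-2.0, 0.0, 2.0)])
    y = np.repeat([0, 1, 2], 50)
    W = fit_ovr_least_squares(X, y, n_classes=3)
    print("training accuracy:", (predict(W, X) == y).mean())
    ```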

  13. Sleep Disorder, Gastrointestinal Problems and Behaviour Problems Seen in Autism Spectrum Disorder Children and Yoga as Therapy: A Descriptive Review

    PubMed Central

    Pradhan, Balaram; Navaneetham, Janardhana

    2016-01-01

    Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder with deficiencies in many developmental milestones during infancy and childhood. Recent research has shown that, apart from behaviour problems, ASD children also suffer from physiological conditions such as disturbed sleep and gastrointestinal problems that could contribute to their daytime behaviour problems. Many parents have reported that lack of sleep among the children results in high levels of stress among family members, particularly the immediate caretaker, who is in most cases the mother of the child. Early behaviour intervention is the norm for ASD children, and it mainly acts at the psychological level. Through this paper, an effort has been made to study the contributions yoga can make to mitigating such problems. Yoga is a non-invasive, alternative therapy that brings about change at both the physiological and psychological levels of an individual. High levels of stress among the caretakers of these children could make them susceptible to non-communicable diseases such as hypertension, diabetes, and arthritis. Parental-based yoga intervention can be more effective for both children and parents, and subsequently for the entire family. PMID:28050484

  14. A complexity theory model in science education problem solving: random walks for working memory and mental capacity.

    PubMed

    Stamovlasis, Dimitrios; Tsaparlis, Georgios

    2003-07-01

    The present study examines the role of limited human channel capacity from a science education perspective. A model of science problem solving has previously been validated by applying concepts and tools of complexity theory (the working-memory random-walk method). The method correlated the subjects' rank-order achievement scores in organic-synthesis chemistry problems with the subjects' working memory capacity. In this work, we apply the same nonlinear approach to a different data set, taken from chemical-equilibrium problem solving. In contrast to the organic-synthesis problems, these problems are algorithmic, require numerical calculations, and have a complex logical structure. As a result, these problems cause deviations from the model and affect the pattern observed with the nonlinear method. In addition to Baddeley's working memory capacity, Pascual-Leone's mental (M-) capacity is examined by the same random-walk method. As the complexity of the problem increases, the fractal dimension of the working-memory random walk demonstrates a sudden drop, while the fractal dimension of the M-capacity random walk decreases in a linear fashion. A review of the basic features of the two capacities and their relation is included. The method and findings have consequences for problem solving not only in chemistry and science education, but also in other disciplines.
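
    The fractal dimension of a one-dimensional walk can be estimated from how the variance of its increments scales with lag: Var[X(t+l) - X(t)] ~ l^(2H) gives the Hurst exponent H, and the graph of the walk has dimension D = 2 - H. A generic sketch (a synthetic walk, not the study's achievement-score data):

    ```python
    import numpy as np

    def fractal_dimension(walk, max_lag=20):
        """Estimate the Hurst exponent H from the variance of increments,
        Var[X(t+l) - X(t)] ~ l**(2H); the walk's graph has D = 2 - H."""
        lags = np.arange(1, max_lag)
        v = [np.var(walk[l:] - walk[:-l]) for l in lags]
        H = 0.5 * np.polyfit(np.log(lags), np.log(v), 1)[0]
        return 2.0 - H

    rng = np.random.default_rng(0)
    walk = np.cumsum(rng.choice([-1.0, 1.0], size=5000))
    print(fractal_dimension(walk))   # close to 1.5 for an uncorrelated walk
    ```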

  15. Numerical simulation of colloidal self-assembly of super-hydrophobic arachnid cerotegument structures.

    PubMed

    Filippov, Alexander É; Wolff, Jonas O; Seiter, Michael; Gorb, Stanislav N

    2017-10-07

    Certain arachnids exhibit complex coatings of their exoskeleton, consisting of globular structures with complex surface features. This so-called cerotegument is formed by a multi-component colloidal secretion that self-assembles and cures on the body surface and leads to high water repellency. Previous ultrastructural studies revealed the involvement of different glandular cells that contribute different components to the secretion mixture, but the overall process of self-assembly into the complex regular structures observed remained highly unclear. Here we study this process from a theoretical point of view, starting from the so-called Tammes problem. We show that slight changes of simple parameters lead to a variety of morphologies that are highly similar to the species-specific cerotegument structures of whip-spiders. These results are important not only for our understanding of the formation of globular hierarchical structures in nature, but also for the fabrication of novel surface coatings by colloidal lithography.
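
    The Tammes problem asks how to place n points on a sphere so that the minimal pairwise distance is maximized. A common heuristic, shown here purely as an illustration of the starting point the authors mention (step size and iteration count are arbitrary), relaxes a Thomson-style pairwise repulsion and reprojects onto the sphere:

    ```python
    import numpy as np

    def tammes_relax(n, steps=3000, lr=0.005, seed=0):
        """Thomson-style heuristic for the Tammes problem: points on the
        unit sphere repel pairwise (~1/r^2) and are reprojected after
        every step, spreading them out and raising the minimal distance."""
        rng = np.random.default_rng(seed)
        p = rng.normal(size=(n, 3))
        p /= np.linalg.norm(p, axis=1, keepdims=True)
        for _ in range(steps):
            d = p[:, None, :] - p[None, :, :]               # pairwise vectors
            r = np.linalg.norm(d, axis=-1) + np.eye(n)      # avoid 0 on diagonal
            p += lr * (d / r[..., None]**3).sum(axis=1)     # net repulsive push
            p /= np.linalg.norm(p, axis=1, keepdims=True)   # back onto sphere
        return p

    p = tammes_relax(12)
    r = np.linalg.norm(p[:, None] - p[None, :], axis=-1) + 2 * np.eye(12)
    print("minimal pairwise distance:", r.min())   # icosahedron gives ~1.051
    ```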

  16. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming so. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  17. How to use MPI communication in highly parallel climate simulations more easily and more efficiently.

    NASA Astrophysics Data System (ADS)

    Behrens, Jörg; Hanke, Moritz; Jahns, Thomas

    2014-05-01

    In this talk we present a way to facilitate efficient use of MPI communication for developers of climate models. Exploiting the performance potential of today's highly parallel supercomputers with real-world simulations is a complex task. This is partly caused by the low-level nature of the MPI communication library, which is the dominant communication tool, at least for inter-node communication. In order to manage the complexity of the task, climate simulations with non-trivial communication patterns often use an internal abstraction layer above MPI without exploiting the benefits of communication aggregation or MPI datatypes. The solution we propose for this complexity and performance problem is the communication library YAXT. This library is built on top of MPI and takes high-level descriptions of arbitrary domain decompositions and automatically derives an efficient collective data exchange. Several exchanges can be aggregated in order to reduce latency costs. Examples are given which demonstrate the simplicity and the performance gains for selected climate applications.
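
    The aggregation idea - describe once what each rank must send where, then let a single collective perform the exchange - is independent of YAXT's concrete API (which is not shown here). A toy halo exchange in mpi4py, with an invented decomposition and field data:

    ```python
    # Run with e.g.: mpirun -n 4 python halo_exchange.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns a slab of a 1D field and must pass one boundary value
    # to each neighbour. Rather than hand-coding point-to-point messages,
    # describe what goes where and let one collective do the exchange.
    field = [float(rank * 10 + i) for i in range(10)]
    send = [None] * size
    if rank > 0:
        send[rank - 1] = field[0]            # left boundary to left neighbour
    if rank < size - 1:
        send[rank + 1] = field[-1]           # right boundary to right neighbour

    recv = comm.alltoall(send)               # one aggregated collective call
    left_halo = recv[rank - 1] if rank > 0 else None
    right_halo = recv[rank + 1] if rank < size - 1 else None
    print(rank, left_halo, right_halo)
    ```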

  18. Internalizing symptoms and conduct problems: Redundant, incremental, or interactive risk factors for adolescent substance use during the first year of high school?

    PubMed

    Khoddam, Rubin; Jackson, Nicholas J; Leventhal, Adam M

    2016-12-01

    The complex interplay of externalizing and internalizing problems in substance use risk is not well understood. This study tested whether the relationship of conduct problems and several internalizing disorders with future substance use is redundant, incremental, or interactive in adolescents. Two semiannual waves of data from the Happiness and Health Study were used, which included 3383 adolescents (M age = 14.1 years; 53% female) in Los Angeles who were beginning high school at baseline. Logistic regression models tested the likelihood of past six-month alcohol, tobacco, marijuana, and any substance use at follow-up conditional on baseline conduct problems, symptoms of one of several internalizing disorders (i.e., Social Phobia and Major Depressive, Generalized Anxiety, Panic, and Obsessive-Compulsive Disorder), and their interaction, adjusting for baseline use and other covariates. Conduct problems were a robust and consistent risk factor for each substance use outcome at follow-up. When adjusting for the internalizing-conduct comorbidity, depressive symptoms were the only internalizing problem whose risk for alcohol, tobacco, and any substance use was incremental to conduct problems. With the exception of social phobia, antagonistic interactive relationships between each internalizing disorder and conduct problems were found when predicting any substance use; internalizing symptoms were a more robust risk factor for substance use in teens with low (vs. high) conduct problems. Although internalizing and externalizing problems both generally increase the risk of substance use, a closer look reveals important nuances in these risk pathways, particularly among teens with comorbid externalizing and internalizing problems.

  19. How students process equations in solving quantitative synthesis problems? Role of mathematical complexity in students' mathematical performance

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-12-01

    We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking, formulation and combination of equations require conceptual reasoning; simplification of equations requires manipulation of equations as computational tools. Mathematical complexity is operationally defined by the number and the type of equations to be manipulated concurrently due to the number of unknowns in each equation. We use two types of synthesis problems, namely, sequential and simultaneous tasks. Sequential synthesis tasks require a chronological application of pertinent concepts, and simultaneous synthesis tasks require a concurrent application of the pertinent concepts. A total of 179 physics majors from a second-year mechanics course participated in the study. Data were collected from written tasks and individual interviews. Results show that mathematical complexity negatively influences the students' mathematical performance on both types of synthesis problems. However, for the sequential synthesis tasks, it interferes only with the students' simplification of equations. For the simultaneous synthesis tasks, mathematical complexity additionally impedes the students' formulation and combination of equations. Several reasons may explain this difference, including the students' different approaches to the two types of synthesis problems, cognitive load, and the variation of mathematical complexity within each synthesis type.

  20. Dynamic emulation modelling for the optimal operation of water systems: an overview

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Giuliani, M.

    2014-12-01

    Despite a sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from a high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve highly resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs, through erosion/sedimentation rebalancing in the operation of run-of-river power plants, to salinity control in lakes and reservoirs.
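
    For orientation, here is a minimal sketch of the non-dynamic emulation idea that DEMo is contrasted with: fit a cheap statistical surrogate to a few expensive simulator runs and query the surrogate instead. The "simulator" below is a stand-in function, and the design choices (kernel, sample size) are illustrative only.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_simulator(x):            # stand-in for a process-based model
          return np.sin(3 * x) + 0.5 * x

      X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # a few costly runs
      y_train = expensive_simulator(X_train).ravel()

      emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
      emulator.fit(X_train, y_train)

      # Cheap approximate responses (with uncertainty) for use inside an
      # optimization loop; DEMo additionally preserves the original model's
      # dynamic state-space structure, which this static surrogate does not.
      X_new = np.linspace(0.0, 2.0, 100).reshape(-1, 1)
      y_hat, y_std = emulator.predict(X_new, return_std=True)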

  1. Can Undergraduates Be Transdisciplinary? Promoting Transdisciplinary Engagement through Global Health Problem-Based Learning

    ERIC Educational Resources Information Center

    Hay, M. Cameron

    2017-01-01

    Undergraduate student learning focuses on the development of disciplinary strength in majors and minors so that students gain depth in particular fields, foster individual expertise, and learn problem solving from disciplinary perspectives. However, the complexities of real-world problems do not respect disciplinary boundaries. Complex problems…

  2. The Process of Solving Complex Problems

    ERIC Educational Resources Information Center

    Fischer, Andreas; Greiff, Samuel; Funke, Joachim

    2012-01-01

    This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application…

  3. Communities of Practice: A New Approach to Solving Complex Educational Problems

    ERIC Educational Resources Information Center

    Cashman, J.; Linehan, P.; Rosser, M.

    2007-01-01

    Communities of Practice offer state agency personnel a promising approach for engaging stakeholder groups in collaboratively solving complex and, often, persistent problems in special education. Communities of Practice can help state agency personnel drive strategy, solve problems, promote the spread of best practices, develop members'…

  4. 6 Essential Questions for Problem Solving

    ERIC Educational Resources Information Center

    Kress, Nancy Emerson

    2017-01-01

    One of the primary expectations that the author has for her students is for them to develop greater independence when solving complex and unique mathematical problems. The story of how the author supports her students as they gain confidence and independence with complex and unique problem-solving tasks, while honoring their expectations with…

  5. Students' and Teachers' Conceptual Metaphors for Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Yee, Sean P.

    2017-01-01

    Metaphors are regularly used by mathematics teachers to relate difficult or complex concepts in classrooms. A complex topic of concern in mathematics education, and most STEM-based education classes, is problem solving. This study identified how students and teachers contextualize mathematical problem solving through their choice of metaphors.…

  6. How Students Process Equations in Solving Quantitative Synthesis Problems? Role of Mathematical Complexity in Students' Mathematical Performance

    ERIC Educational Resources Information Center

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-01-01

    We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking,…

  7. Machine Learning and Inverse Problem in Geodynamics

    NASA Astrophysics Data System (ADS)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.

    2017-12-01

    During the past few decades, numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solution. However, in recent years the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high-pressure mineral physics, geochemistry, and petrology, where the number of mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of the parameters that are incorporated in the numerical models as inputs are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with the emphasis on SML techniques in solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered measures of the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms, we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.
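
    A schematic sketch of the supervised setup described, using scikit-learn's SVC on synthetic stand-in data: the two features play the role of the stagnated slab/plume volume fractions, and the label plays the role of the density-anomaly class. The rule used to generate the labels is invented purely so the example runs.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.random((200, 2))    # stand-in: stagnated slab/plume volume fractions
      y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # stand-in anomaly class

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))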

  8. Ab initio nanostructure determination

    NASA Astrophysics Data System (ADS)

    Gujarathi, Saurabh

    Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. This problem is cast as a complex network problem where the edges in a network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, for reconstructing the locations of the nodes of the network given only the edge weights of the Euclidean network is presented. Timing results indicate that, in two dimensions, the algorithm is a low-order polynomial in the number of nodes. With this implementation, Euclidean networks of about one thousand nodes are reconstructed in two dimensions in approximately twenty-four hours on a desktop computer. In three dimensions the computational cost of reconstruction is a higher-order polynomial in the number of nodes, and reconstruction of small Euclidean networks in three dimensions is shown. If a starting network of size five is assumed to be given, then for a network of size 100 the remaining reconstruction can be done in about two hours on a desktop computer. In situations with less precise data, modifications of the method may be necessary and are discussed. A related problem in one dimension, known as the Optimal Golomb Ruler (OGR), is also studied. A statistical-physics Hamiltonian to describe the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Despite the fact that the Hamiltonian is not disordered, the asymmetric phase is highly irregular with geometric frustration. The phase diagram is obtained, and it is seen that even at a very low temperature T there is a phase transition at a finite, non-zero value of the constraint parameter gamma/mu. Analytic calculations for the scaling of the density and free energy of the ruler are carried out and compared with those from the mean-field approach. A scaling law is also derived for the length of the OGR, which is consistent with the Erdős conjecture and with numerical results.
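
    Tribond itself is only summarized above. For orientation, the sketch below solves the easier, fully assigned version of the problem with classical multidimensional scaling: recovering 2D coordinates (up to rotation and translation) from a complete matrix of pairwise Euclidean distances. The hard part Tribond addresses, which this sketch does not, is that the distances arrive as an unassigned list of edge weights.

      import numpy as np

      def classical_mds(D, dim=2):
          """Coordinates (up to rigid motion) from a full distance matrix."""
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
          B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
          w, V = np.linalg.eigh(B)
          idx = np.argsort(w)[::-1][:dim]          # top eigenpairs
          return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

      pts = np.random.default_rng(2).random((6, 2))
      D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      rec = classical_mds(D)
      D_rec = np.linalg.norm(rec[:, None, :] - rec[None, :, :], axis=-1)
      assert np.allclose(D_rec, D, atol=1e-8)      # distances are reproduced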

  9. Peculiarities of solving the problems of modern logistics in high-rise construction and industrial production

    NASA Astrophysics Data System (ADS)

    Rubtsov, Anatoliy E.; Ushakova, Elena V.; Chirkova, Tamara V.

    2018-03-01

    Based on an analysis of the structure of the enterprise (construction organization) and of the infrastructure of the entire logistics system in which this enterprise operates, this article proposes an approach to solving the problems of structural optimization, together with a set of calculation tasks based on customer orders as well as on the required levels of insurance stocks, transit stocks and other types of stocks in the distribution network, the modes of operation of the in-company transport and storage complex, and a number of other factors.

  10. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…

  11. Can the Isolated-Elements Strategy Be Improved by Targeting Points of High Cognitive Load for Additional Practice?

    ERIC Educational Resources Information Center

    Ayres, Paul

    2013-01-01

    Reducing problem complexity by isolating elements has been shown to be an effective instructional strategy. Novices, in particular, benefit from learning from worked examples that contain partially interacting elements rather than worked examples that provide full interacting elements. This study investigated whether the isolating-elements…

  12. Youth Risk Assessment in Complex Agency Practice

    ERIC Educational Resources Information Center

    Groner, Mark R.; Solomon, Jean

    2007-01-01

    Advancements in the delivery of community-based services and tight utilization management of high-cost treatment options result in youths with serious behavior problems receiving intervention in lower levels of care than was true ten or fifteen years ago. This shift in where services tend to be delivered necessitates enhancement of risk assessment…

  13. Adjustment and Other Factors Related to High School Aged Students Identified as Hearing Impaired

    ERIC Educational Resources Information Center

    Milano, Charlene; Upshire, Tara; Scarazzo, Sara; Schade, Benjamin P.; Larwin, Karen H.

    2016-01-01

    Healthy social, emotional and cognitive development of deaf children depends upon complex interactions between the many individual and environmental factors associated with deafness. Deaf children and adolescents have been reported to possess greater rates of mental health problems than hearing children and adolescents. Dysfunction in one or more…

  14. Grand Challenges and Chemical Engineering Curriculum--Developments at TU Dortmund University

    ERIC Educational Resources Information Center

    Kockmann, Norbert; Lutze, Philip; Gorak, Andrzej

    2016-01-01

    The chemical processing industry is progressively focusing its research activities and product placements on the areas of Grand Challenges (or Global Megatrends) such as mobility, energy, communication, or health care and food. Innovation in all these fields requires solving highly complex problems, rapid product development, as well as dealing with…

  15. Military Social Work: Opportunities and Challenges for Social Work Education

    ERIC Educational Resources Information Center

    Wooten, Nikki R.

    2015-01-01

    Military social work is a specialized field of practice spanning the micro-macro continuum and requiring advanced social work knowledge and skills. The complex behavioral health problems and service needs of Iraq and Afghanistan veterans highlight the need for highly trained social work professionals who can provide militarily relevant and…

  16. Community-Based Suicide Prevention Research in Remote On-Reserve First Nations Communities

    ERIC Educational Resources Information Center

    Isaak, Corinne A.; Campeau, Mike; Katz, Laurence Y.; Enns, Murray W.; Elias, Brenda; Sareen, Jitender

    2010-01-01

    Suicide is a complex problem linked to genetic, environmental, psychological and community factors. For the Aboriginal population more specifically, loss of culture, history of traumatic events, individual, family and community factors may also play a role in suicidal behaviour. Of particular concern is the high rate of suicide among Canadian…

  17. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  18. Stone Soup Partnership: A Grassroots Model of Community Service.

    ERIC Educational Resources Information Center

    Kittredge, Robert E.

    1997-01-01

    Stone Soup Partnership is a collaboration between California State University at Fresno and its surrounding community to address serious problems in a high-crime, impoverished apartment complex near the university. The program involves students in service learning for university credit, and has expanded from a single summer youth program to a…

  19. Ready, Aim, Perform! Targeted Micro-Training for Performance Intervention

    ERIC Educational Resources Information Center

    Carpenter, Julia; Forde, Dahlia S.; Stevens, Denise R.; Flango, Vincent; Babcock, Lisa K.

    2016-01-01

    The Department of Veterans Affairs has an immediate problem at hand. Tens of thousands of employees are working in a high-stress work environment where fast-paced daily production requirements are critical. Employees are faced with a tremendous backlog of veterans' claims. Unfortunately, not only are the claims extremely complex, but there is…

  20. One Hundred Ninety-five Cases of High-voltage Electric Injury

    DTIC Science & Technology

    2005-08-01

    that level; and T4 to T5 paraplegia, secondary to fractures of T4 to T7. In 3 cases, fractures were not present: one case of a T11 to T12 sensory ...problems, including fractures, neurological injuries, ocular injuries, and complex reconstructive and rehabilitative needs, underscores the

  1. The effects of monitoring environment on problem-solving performance.

    PubMed

    Laird, Brian K; Bailey, Charles D; Hester, Kim

    2018-01-01

    While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes (no monitoring, human monitoring, and electronic monitoring), 114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.

  2. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase-field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems, and other often expensive complex computer systems. Access to such resources is often limited, unstable, and accompanied by various administrative problems. In addition, the variety of software and settings across computing clusters sometimes does not allow researchers to use unified program code; the code must be adapted for each configuration of the computer complex. The practical experience of the authors has shown that the creation of a special control system for computing, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the performance of scientific research. In the current paper we show the principal idea of such a system and justify its efficiency.

  3. The prediction of crystal structure by merging knowledge methods with first principles quantum mechanics

    NASA Astrophysics Data System (ADS)

    Ceder, Gerbrand

    2007-03-01

    The prediction of structure is a key problem in computational materials science that forms the platform on which rational materials design can be performed. Finding structure by traditional optimization methods on quantum mechanical energy models is not possible due to the complexity and high dimensionality of the coordinate space. An unusual but efficient solution to this problem can be obtained by merging ideas from heuristic and ab initio methods: in the same way that scientists build empirical rules by observation of experimental trends, we have developed machine learning approaches that extract knowledge from a large set of experimental information and a database of over 15,000 first-principles computations, and used these to rapidly direct accurate quantum mechanical techniques to the lowest-energy crystal structure of a material. Knowledge is captured in a Bayesian probability network that relates the probability of finding a particular crystal structure at a given composition to structure and energy information at other compositions. We show that this approach is highly efficient in finding the ground states of binary metallic alloys and can be easily generalized to more complex systems.

  4. Mechanism for Self-Reacted Friction Stir Welding

    NASA Technical Reports Server (NTRS)

    Venable, Richard; Bucher, Joseph

    2004-01-01

    A mechanism has been designed to apply the loads (the stirring and the reaction forces and torques) in self-reacted friction stir welding. This mechanism differs somewhat from mechanisms used in conventional friction stir welding, as described below. The tooling needed to apply the large reaction loads in conventional friction stir welding can be complex. Self-reacted friction stir welding has become popular in the solid-state welding community as a means of reducing the complexity of tooling and reducing costs. The main problems inherent in self-reacted friction stir welding originate in the high stresses encountered by the pin-and-shoulder assembly that produces the weld. The design of the present mechanism solves these problems. The mechanism includes a redesigned pin-and-shoulder assembly. The welding torque is transmitted into the welding pin by a square pin that fits into a square bushing with set-screws. The opposite or back shoulder is held in place by a Woodruff key and a high-strength nut on a threaded shaft. The Woodruff key reacts the torque, while the nut reacts the tensile load on the shaft.

  5. Teaching Problem Solving; the Effect of Algorithmic and Heuristic Problem Solving Training in Relation to Task Complexity and Relevant Aptitudes.

    ERIC Educational Resources Information Center

    de Leeuw, L.

    Sixty-four fifth- and sixth-grade pupils were taught number series extrapolation by either an algorithmic, fully prescribed problem-solving method or a heuristic, less prescribed method. The training problems fell within categories of two degrees of complexity. There were 16 subjects in each cell of the 2 by 2 design used. Aptitude Treatment…

  6. Role of temperament in early adolescent pure and co-occurring internalizing and externalizing problems using a bifactor model: Moderation by parenting and gender

    PubMed Central

    WANG, FRANCES L.; EISENBERG, NANCY; VALIENTE, CARLOS; SPINRAD, TRACY L.

    2015-01-01

    We contribute to the literature on the relations of temperament to externalizing and internalizing problems by considering parental emotional expressivity and child gender as moderators of such relations and examining prediction of pure and co-occurring problem behaviors during early to middle adolescence using bifactor models (which provide unique and continuous factors for pure and co-occurring internalizing and externalizing problems). Parents and teachers reported on children’s (4.5- to 8-year-olds; N = 214) and early adolescents’ (6 years later; N = 168) effortful control, impulsivity, anger, sadness, and problem behaviors. Parental emotional expressivity was measured observationally and with parents’ self-reports. Early-adolescents’ pure externalizing and co-occurring problems shared childhood and/or early-adolescent risk factors of low effortful control, high impulsivity, and high anger. Lower childhood and early-adolescent impulsivity and higher early-adolescent sadness predicted early-adolescents’ pure internalizing. Childhood positive parental emotional expressivity more consistently related to early-adolescents’ lower pure externalizing compared to co-occurring problems and pure internalizing. Lower effortful control predicted changes in externalizing (pure and co-occurring) over 6 years, but only when parental positive expressivity was low. Higher impulsivity predicted co-occurring problems only for boys. Findings highlight the probable complex developmental pathways to adolescent pure and co-occurring externalizing and internalizing problems. PMID:26646352

  7. Role of temperament in early adolescent pure and co-occurring internalizing and externalizing problems using a bifactor model: Moderation by parenting and gender.

    PubMed

    Wang, Frances L; Eisenberg, Nancy; Valiente, Carlos; Spinrad, Tracy L

    2016-11-01

    We contribute to the literature on the relations of temperament to externalizing and internalizing problems by considering parental emotional expressivity and child gender as moderators of such relations and examining prediction of pure and co-occurring problem behaviors during early to middle adolescence using bifactor models (which provide unique and continuous factors for pure and co-occurring internalizing and externalizing problems). Parents and teachers reported on children's (4.5- to 8-year-olds; N = 214) and early adolescents' (6 years later; N = 168) effortful control, impulsivity, anger, sadness, and problem behaviors. Parental emotional expressivity was measured observationally and with parents' self-reports. Early-adolescents' pure externalizing and co-occurring problems shared childhood and/or early-adolescent risk factors of low effortful control, high impulsivity, and high anger. Lower childhood and early-adolescent impulsivity and higher early-adolescent sadness predicted early-adolescents' pure internalizing. Childhood positive parental emotional expressivity more consistently related to early-adolescents' lower pure externalizing compared to co-occurring problems and pure internalizing. Lower effortful control predicted changes in externalizing (pure and co-occurring) over 6 years, but only when parental positive expressivity was low. Higher impulsivity predicted co-occurring problems only for boys. Findings highlight the probable complex developmental pathways to adolescent pure and co-occurring externalizing and internalizing problems.

  8. Rapid Preliminary Design of Interplanetary Trajectories Using the Evolutionary Mission Trajectory Generator

    NASA Technical Reports Server (NTRS)

    Englander, Jacob

    2016-01-01

    Preliminary design of interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a hybrid optimal control problem. The method is demonstrated on notional high-thrust chemical and low-thrust electric propulsion missions. In the low-thrust case, the hybrid optimal control problem is augmented to include systems design optimization.
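
    The record describes the approach only at a high level; the toy sketch below illustrates posing the discrete part of such a problem (the flyby sequence) as an evolutionary search. The fitness function is a deterministic placeholder for what, in a real mission design tool, would be the optimized inner-loop trajectory cost for that sequence; everything here is hypothetical.

      import random

      BODIES = ["Venus", "Earth", "Mars", "Jupiter"]

      def fitness(seq):
          # Placeholder for the inner loop: a real tool would return the best
          # achievable trajectory cost (e.g., propellant mass) for this sequence.
          return random.Random(hash(tuple(seq))).random()

      def mutate(seq):
          seq, r = seq[:], random.random()
          if seq and r < 0.4:                              # swap one flyby body
              seq[random.randrange(len(seq))] = random.choice(BODIES)
          elif r < 0.7 and len(seq) < 4:                   # insert a flyby
              seq.insert(random.randrange(len(seq) + 1), random.choice(BODIES))
          elif seq:                                        # drop a flyby
              seq.pop(random.randrange(len(seq)))
          return seq

      random.seed(0)
      pop = [[random.choice(BODIES)] for _ in range(20)]
      for _ in range(50):                                  # evolve the sequences
          pop.sort(key=fitness, reverse=True)
          pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
      print("best flyby sequence:", max(pop, key=fitness))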

  9. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better afford the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
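
    Not the authors' parallel scheme; as a point of reference, here is a minimal serial 1D Yee update on a graded (non-uniform) mesh, showing where the local cell size enters the update coefficients and why the smallest cell limits the time step. All sizes are illustrative.

      import numpy as np

      # Graded mesh: fine cells in the middle (e.g., near a body), coarse outside.
      dz = np.concatenate([np.full(40, 2e-3), np.full(40, 5e-4), np.full(40, 2e-3)])
      eps0, mu0 = 8.854e-12, 4e-7 * np.pi
      c = 1.0 / np.sqrt(eps0 * mu0)
      dt = 0.9 * dz.min() / c           # stability is set by the smallest cell

      N = dz.size
      Ex = np.zeros(N + 1)              # E on cell edges, H in cell centers
      Hy = np.zeros(N)
      for n in range(600):
          Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
          # E nodes see the average of the two adjacent (unequal) cell sizes.
          Ex[1:-1] += dt / (eps0 * 0.5 * (dz[1:] + dz[:-1])) * (Hy[1:] - Hy[:-1])
          Ex[N // 2] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source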

  10. The prevalence of foot problems in older women: a cause for concern.

    PubMed

    Dawson, Jill; Thorogood, Margaret; Marks, Sally-Anne; Juszczak, Ed; Dodd, Chris; Lavis, Grahame; Fitzpatrick, Ray

    2002-06-01

    Painful feet are an extremely common problem amongst older women. Such problems increase the risk of falls and hamper mobility. The aetiology of painful and deformed feet is poorly understood. Data were obtained during a pilot case-control study about past high heel usage in women, in relation to osteoarthritis of the knee. A total of 127 women aged 50-70 were interviewed (31 cases, 96 controls); case-control sets were matched for age. The following information was obtained about footwear: (1) age when first wore shoes with heels 1, 2 and 3 inches high; (2) height of heels worn for work; (3) maximum height of heels worn regularly for work, going out socially and for dancing, in 10-year age bands. Information about work-related activities and lifetime occupational history was gathered using a Life-Grid. The interview included a foot inspection. Foot problems, particularly foot arthritis, affected considerably more cases than controls (45 per cent versus 16 per cent, p = 0.001) and was considered a confounder. Cases were therefore excluded from subsequent analyses. Amongst controls, the prevalence of any foot problems was very high (83 per cent). All women had regularly worn one inch heels and few (8 per cent) had never worn 2 inch heels. Foot problems were significantly associated with a history of wearing relatively lower heels. Few work activities were related to foot problems; regular lifting was associated with foot pain (p = 0.03). Most women in this age-group have been exposed to high-heeled shoes over many years, making aetiological research difficult in this area. Foot pain and deformities are widespread. The relationship between footwear, occupational activities and foot problems is a complex one that deserves considerably more research.

  11. Multi-level systems modeling and optimization for novel aircraft

    NASA Astrophysics Data System (ADS)

    Subramanian, Shreyas Vathul

    This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization, and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, and the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems: a surveillance UAV swarm problem, and the design of noise-optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission best achieved via a large collection of interacting simple systems, or a relatively few highly capable, complex air vehicles). The vastly unexplored area of optimization in evolving design spaces will be studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors, and the SoS architect.

  12. The emerging problem of physical child abuse in South Korea.

    PubMed

    Hahm, H C; Guterman, N B

    2001-05-01

    South Korea has had remarkably high incidence and prevalence rates of physical violence against children, yet the problem received only limited public and professional attention until very recently. This article represents the first attempt in English to systematically analyze South Korea's recent epidemiological studies on child maltreatment. Discussed are sociocultural factors that have contributed both to delays in child protection laws and to low public awareness of the problem of child abuse. The article highlights methodological issues concerning the definition of physical abuse in South Korea and the complex attitudes toward violence. It also examines the role of the Korean women's movement in the reform of family laws and the recent establishment of new child protection legislation. Suggestions for future directions for the problem of child maltreatment within South Korea are presented.

  13. Overset meshing coupled with hybridizable discontinuous Galerkin finite elements

    DOE PAGES

    Kauffman, Justin A.; Sheldon, Jason P.; Miller, Scott T.

    2017-03-01

    We introduce the use of hybridizable discontinuous Galerkin (HDG) finite element methods on overlapping (overset) meshes. Overset mesh methods are advantageous for solving problems on complex geometrical domains. We combine the geometric flexibility of overset methods with the advantages of HDG methods: arbitrarily high-order accuracy, reduced size of the global discrete problem, and the ability to solve elliptic, parabolic, and/or hyperbolic problems with a unified form of discretization. Our approach to developing the ‘overset HDG’ method is to couple the global solution from one mesh to the local solution on the overset mesh. We present numerical examples for steady convection–diffusion and static elasticity problems. The examples demonstrate optimal order convergence in all primal fields for an arbitrary amount of overlap of the underlying meshes.

  14. An experimental investigation of the impingement of a planar shock wave on an axisymmetric body at Mach 3

    NASA Technical Reports Server (NTRS)

    Brosh, A.; Kussoy, M. I.

    1983-01-01

    An experimental study of the flow caused by a planar shock wave impinging obliquely on a cylinder is presented. The complex three-dimensional shock wave and boundary layer interaction occurring in practical problems, such as shock wave impingement from the shuttle nose on an external fuel tank or store carriage interference on a supersonic tactical aircraft, was investigated. A database for numerical computations of complex flows was also established. The experimental techniques included pressure measurements and oil flow patterns on the surface of the cylinder, and shadowgraphs and total and static pressure surveys on the leeward and windward planes of symmetry. The complete data are presented in tabular form. The results reveal a highly complex flow field with two separation zones, regions of high crossflow, and multiple reflected shocks and expansion fans.

  15. Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.

    1999-11-01

    A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
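
    A minimal sketch of the basic building block behind such schemes (not the authors' multi-domain solver): Trefethen's Chebyshev collocation differentiation matrix, which each sub-domain of a spectral multi-domain method can use to evaluate spatial derivatives with spectral accuracy.

      import numpy as np

      def cheb(N):
          """Chebyshev differentiation matrix D and nodes x on [-1, 1]."""
          if N == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
          dX = x[:, None] - x[None, :]
          D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
          return D - np.diag(D.sum(axis=1)), x   # negative-sum trick for diagonal

      D, x = cheb(16)
      err = np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)).max()
      print(f"max derivative error with 17 nodes: {err:.2e}")   # spectrally small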

  16. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
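
    A simplified complex-variable sketch of the idea the CVBEM exploits, not the CVBEM nodal formulation itself: the real part of any analytic function is harmonic, so fitting Re f(z) to Dirichlet data at boundary collocation points gives an approximation that satisfies the Laplace equation exactly in the interior. Here f is a complex polynomial on the unit disk, and the boundary data are chosen so the exact answer is known.

      import numpy as np

      theta = np.linspace(0.0, 2.0 * np.pi, 80, endpoint=False)
      zb = np.exp(1j * theta)                  # boundary collocation points
      g = np.real(zb**2) + 1.0                 # prescribed boundary temperature

      # Approximate u = Re(sum_k c_k z^k); collocate on the boundary and solve
      # least squares for the real and imaginary parts of the coefficients.
      K = 8
      A = np.stack([zb**k for k in range(K)], axis=1)
      Areal = np.hstack([A.real, -A.imag])
      coef, *_ = np.linalg.lstsq(Areal, g, rcond=None)
      c = coef[:K] + 1j * coef[K:]

      z0 = 0.3 + 0.4j                          # any interior point
      u0 = np.real(np.polyval(c[::-1], z0))
      print(f"u(0.3, 0.4) = {u0:.6f}  (exact {np.real(z0**2) + 1.0:.6f})")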

  17. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  18. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. A general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  19. Clarification process: Resolution of decision-problem conditions

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A model of a general process which occurs in both decisionmaking and problem-solving tasks is presented. It is called the clarification model and is highly dependent on information flow. The model addresses the possible constraints of individual differences and experience in achieving success in resolving decision-problem conditions. As indicated, the application of the clarification process model is only necessary for certain classes of the basic decision-problem condition. With less complex decision-problem conditions, certain phases of the model may be omitted. The model may be applied across a wide range of decision-problem conditions. The model consists of two major components: (1) the five-phase prescriptive sequence (based on previous approaches to both concepts) and (2) the information manipulation function (which draws upon current ideas in the areas of information processing, computer programming, memory, and thinking). The two components are linked together to provide a structure that assists in understanding the process of resolving problems and making decisions.

  20. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  1. Lessons Learned from Crowdsourcing Complex Engineering Tasks.

    PubMed

    Staffelbach, Matthew; Sempolinski, Peter; Kijewski-Correa, Tracy; Thain, Douglas; Wei, Daniel; Kareem, Ahsan; Madey, Gregory

    2015-01-01

    Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations. Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks. This was done to determine if the benefits of crowdsourcing could be harnessed to accurately and effectively contribute to solving complex real-world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data. We compared the skill of the anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students, making a first pass at analyzing wind simulation data. For the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. A second phase of our experiment instructed crowd workers and students to create simulations on our Virtual Wind Tunnel website to solve a more complex task. With a sufficiently comprehensive tutorial and compensation similar to typical crowdsourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students with more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers. As tasks become more complex, the employment relationship begins to become more akin to outsourcing than crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems.

  2. Risk Management in Complex Construction Projects that Apply Renewable Energy Sources: A Case Study of the Realization Phase of the Energis Educational and Research Intelligent Building

    NASA Astrophysics Data System (ADS)

    Krechowicz, Maria

    2017-10-01

    Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique: it has a project-specific purpose, its own structural complexity, the owner’s expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources are applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying fuzzy fault tree analysis, shown on the example of one top critical risk. Application of fuzzy set theory to the proposed model made it possible to reduce uncertainty and to avoid the difficulty of obtaining crisp values for the basic-event probabilities, a common problem in expert risk assessment aimed at giving an exact probability score for each unwanted event.
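
    The paper's full model is not reproduced in this record. As a hedged illustration of the core arithmetic in a fuzzy fault tree, the sketch below represents basic-event probabilities as triangular fuzzy numbers (low, modal, high), evaluates an AND gate as a component-wise product, and an OR gate as a complement product. The event names, tree structure, and numbers are all invented.

      import numpy as np

      # Triangular fuzzy probabilities (low, modal, high) for basic events,
      # e.g. as elicited from experts; the values here are illustrative only.
      ground_risk   = np.array([0.02, 0.05, 0.10])
      drilling_fail = np.array([0.01, 0.03, 0.08])
      pump_fail     = np.array([0.05, 0.08, 0.15])

      def AND(*events):                 # all inputs must occur
          out = np.ones(3)
          for e in events:
              out = out * e
          return out

      def OR(*events):                  # at least one input occurs
          out = np.ones(3)
          for e in events:
              out = out * (1.0 - e)
          return 1.0 - out

      # Hypothetical top event: heat-source subsystem failure.
      top = OR(AND(ground_risk, drilling_fail), pump_fail)
      print("fuzzy top-event probability (l, m, u):", np.round(top, 4))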

  3. Identifying problems and generating recommendations for enhancing complex systems: applying the abstraction hierarchy framework as an analytical tool.

    PubMed

    Xu, Wei

    2007-12-01

    This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.

  4. Transformations of software design and code may lead to reduced errors

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1983-01-01

    The capability of programmers and non-programmers to specify problem solutions by developing example solutions (and, for the programmers, also by writing computer programs) was investigated; each method of specification was accomplished at various levels of problem complexity. The level of difficulty of each problem was reflected by the number of steps needed by the user to develop a solution. Machine processing of the user inputs permitted inferences to be developed about the algorithms required to solve a particular problem. The interactive feedback of processing results led users to a more precise definition of the desired solution. Two participant groups (programmers and bookkeepers/accountants) working with three levels of problem complexity and three levels of processor complexity were used. The experimental task required specification of a logic for the solution of a Navy task force problem.

  5. How can we improve problem solving in undergraduate biology? Applying lessons from 30 years of physics education research.

    PubMed

    Hoskinson, A-M; Caballero, M D; Knight, J K

    2013-06-01

    If students are to successfully grapple with authentic, complex biological problems as scientists and citizens, they need practice solving such problems during their undergraduate years. Physics education researchers have investigated student problem solving for the past three decades. Although physics and biology problems differ in structure and content, the instructional purposes align closely: explaining patterns and processes in the natural world and making predictions about physical and biological systems. In this paper, we discuss how research-supported approaches developed by physics education researchers can be adopted by biologists to enhance student problem-solving skills. First, we compare the problems that biology students are typically asked to solve with authentic, complex problems. We then describe the development of research-validated physics curricula emphasizing process skills in problem solving. We show that solving authentic, complex biology problems requires many of the same skills that practicing physicists and biologists use in representing problems, seeking relationships, making predictions, and verifying or checking solutions. We assert that acquiring these skills can help biology students become competent problem solvers. Finally, we propose how biology scholars can apply lessons from physics education in their classrooms and inspire new studies in biology education research.

  6. Surface similarity-based molecular query-retrieval

    PubMed Central

    Singh, Rahul

    2007-01-01

    Background Discerning the similarity between molecules is a challenging problem in drug discovery as well as in molecular biology. The importance of this problem is due to the fact that the biochemical characteristics of a molecule are closely related to its structure. Therefore molecular similarity is a key notion in investigations targeting exploration of molecular structural space, query-retrieval in molecular databases, and structure-activity modeling. Determining molecular similarity is related to the choice of molecular representation. Currently, representations with high descriptive power and physical relevance like 3D surface-based descriptors are available. Information from such representations is both surface-based and volumetric. However, most techniques for determining molecular similarity tend to focus on idealized 2D graph-based descriptors due to the complexity that accompanies reasoning with more elaborate representations. Results This paper addresses the problem of determining similarity when molecules are described using complex surface-based representations. It proposes an intrinsic, spherical representation that systematically maps points on a molecular surface to points on a standard coordinate system (a sphere). Molecular surface properties such as shape, field strengths, and effects due to field superpositioning can then be captured as distributions on the surface of the sphere. Surface-based molecular similarity is subsequently determined by computing the similarity of the surface-property distributions using a novel formulation of histogram-intersection. The similarity formulation is not only sensitive to the 3D distribution of the surface properties, but is also highly efficient to compute. Conclusion The proposed method obviates the computationally expensive step of molecular pose-optimisation, can incorporate conformational variations, and facilitates highly efficient determination of similarity by directly comparing molecular surfaces and surface-based properties. Retrieval performance, applications in structure-activity modeling of complex biological properties, and comparisons with existing research and commercial methods demonstrate the validity and effectiveness of the approach. PMID:17634096
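
    A minimal numpy sketch of the similarity core named above, under the usual definition of histogram intersection (the spherical mapping itself is beyond a few lines, so the two distributions here are random stand-ins for binned surface properties):

      import numpy as np

      def histogram_intersection(h1, h2):
          """Similarity in [0, 1] between two normalized histograms."""
          return np.minimum(h1, h2).sum()

      rng = np.random.default_rng(3)
      # Stand-ins for surface-property distributions (e.g., a field strength
      # binned over the standard sphere) of two molecules.
      a = rng.random(64); a /= a.sum()
      b = rng.random(64); b /= b.sum()
      print(f"surface-property similarity: {histogram_intersection(a, b):.3f}")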

  7. High field hyperpolarization-EXSY experiment for fast determination of dissociation rates in SABRE complexes.

    PubMed

    Hermkens, Niels K J; Feiters, Martin C; Rutjes, Floris P J T; Wijmenga, Sybren S; Tessari, Marco

    2017-03-01

    SABRE (Signal Amplification By Reversible Exchange) is a nuclear spin hyperpolarization technique based on the reversible concurrent binding of small molecules and para-hydrogen (p-H2) to an iridium metal complex in solution. At low magnetic field, spontaneous conversion of p-H2 spin order to enhanced longitudinal magnetization of the nuclear spins of the other ligands occurs. Subsequent complex dissociation results in hyperpolarized substrate molecules in solution. The lifetime of this complex plays a crucial role in the attained SABRE NMR signal enhancements. Depending on the ligands, vastly different dissociation rates have previously been measured using EXSY or selective inversion experiments. However, both these approaches are generally time-consuming due to the long recycle delays (up to 2 min) necessary to reach thermal equilibrium for the nuclear spins of interest. In the case of dilute solutions, signal averaging aggravates the problem, further extending the experimental time. Here, a new approach is proposed based on coherent hyperpolarization transfer to substrate protons in asymmetric complexes at high magnetic field. We have previously shown that such asymmetric complexes are important for the application of SABRE to dilute substrates. Our results demonstrate that a series of high-sensitivity EXSY spectra can be collected in a short experimental time thanks to the NMR signal enhancement and a much shorter recycle delay.

  8. Analogy as a strategy for supporting complex problem solving under uncertainty.

    PubMed

    Chan, Joel; Paletz, Susannah B F; Schunn, Christian D

    2012-11-01

    Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.

  9. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    NASA Astrophysics Data System (ADS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when coordinating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, using a tri-level hybrid and heterogeneous programming model (MPI + OpenMP + CUDA), we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×, while the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and we overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.
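
    The paper's actual CPU-GPU load-balancing scheme is not reproduced here; as a rough illustration of the idea, a static partition assigns grid cells to each side in proportion to measured throughput while respecting the "store-poor" GPU's memory limit (all numbers below are invented):

        def split_cells(n_cells, gpu_rate, cpu_rate, gpu_capacity):
            # Assign cells proportionally to measured throughput (cells/s),
            # but never more than the GPU's memory can hold.
            gpu_share = int(n_cells * gpu_rate / (gpu_rate + cpu_rate))
            gpu_cells = min(gpu_share, gpu_capacity)
            return gpu_cells, n_cells - gpu_cells

        # Toy numbers: the GPU is ~1.3x faster than two CPUs but memory-bound.
        print(split_cells(n_cells=10_000_000, gpu_rate=1.3, cpu_rate=1.0,
                          gpu_capacity=4_000_000))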

  10. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn; Deng, Xiaogang; Zhang, Lilun

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when coordinating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, using a tri-level hybrid and heterogeneous programming model (MPI + OpenMP + CUDA), we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×, while the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and we overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.

  11. RNA motif search with data-driven element ordering.

    PubMed

    Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa

    2016-05-18

    In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented it in a new motif search tool, RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements, and the running time of the algorithm is highly dependent on their ordering. By approaching the element-ordering problem in a principled way, we demonstrate more than a 100-fold speedup of the search for complex motifs compared to previously published tools. The new method allows for a significant speedup of the search for complex motifs that include pseudoknots; such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing power. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo .
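
    RNArobo's actual data-driven ordering statistics are not given in the abstract; the underlying idea, though, is to search the most restrictive motif element first so the backtracking tree is pruned early. A toy sketch, where the per-element match probabilities are hypothetical stand-ins for statistics gathered from data:

        # Order motif elements so the rarest-matching one is searched first.
        elements = [
            {"name": "hairpin1",  "match_prob": 1e-3},   # hypothetical estimates
            {"name": "ss_linker", "match_prob": 0.3},
            {"name": "helix2",    "match_prob": 5e-5},
        ]
        search_order = sorted(elements, key=lambda e: e["match_prob"])
        print([e["name"] for e in search_order])          # helix2, hairpin1, ss_linker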

  12. Reasoning about Resources and Hierarchical Tasks Using OWL and SWRL

    NASA Astrophysics Data System (ADS)

    Elenius, Daniel; Martin, David; Ford, Reginald; Denker, Grit

    Military training and testing events are highly complex affairs, potentially involving dozens of legacy systems that need to interoperate in a meaningful way. There are superficial interoperability concerns (such as two systems not sharing the same messaging formats), but also substantive problems, such as different systems not sharing the same understanding of the terrain, positions of entities, and so forth. We describe our approach to facilitating such events: describe the systems and requirements in great detail using ontologies, and use automated reasoning to find and help resolve problems. The complexity of our problem took us to the limits of what one can do with OWL, and we needed to introduce some innovative techniques for using and extending it. We describe our novel ways of using SWRL and discuss its limitations, as well as extensions to it that we found necessary or desirable. Another innovation is our representation of hierarchical tasks in OWL, and an engine that reasons about them. Our task ontology has proved to be a very flexible and expressive framework for describing requirements on resources and their capabilities in order to achieve some purpose.

  13. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase, before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for continued developments in computational engineering and computing hardware and software. The NPSS concept described here will provide that framework.

  14. Robonaut 2 and You: Specifying and Executing Complex Operations

    NASA Technical Reports Server (NTRS)

    Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia

    2017-01-01

    Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs, to highlight the challenges inherent to the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.

  15. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; beforehand, a training dataset is established that contains a mass of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity, and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also strong robustness to surface color and complex textures.
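
    Of the three error-correction constraints, the epipolar one is the most standard: for a calibrated projector-camera (or stereo) pair with fundamental matrix F, a correct correspondence lies on its epipolar line. A minimal check, rejecting matches whose point-to-epipolar-line distance exceeds a pixel tolerance (F and the tolerance are assumed to come from calibration):

        import numpy as np

        def passes_epipolar(F, x, x_prime, tol_px=2.0):
            # x, x_prime: matched points in pixel coordinates (2-vectors).
            xh  = np.append(np.asarray(x, float), 1.0)        # homogeneous coords
            xph = np.append(np.asarray(x_prime, float), 1.0)
            line = F @ xh                                     # epipolar line in image 2
            dist = abs(xph @ line) / np.hypot(line[0], line[1])
            return dist < tol_px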

  16. Predictive model of complexity in early palliative care: a cohort of advanced cancer patients (PALCOM study).

    PubMed

    Tuca, Albert; Gómez-Martínez, Mónica; Prat, Aleix

    2018-01-01

    The model of early palliative care (PC) integrated in oncology is based on shared care from diagnosis to the end of life and is mainly focused on patients with greater complexity. However, there is no definition of, or tools to evaluate, PC complexity. The objectives of the study were to identify the factors influencing determination of the level of complexity, propose predictive models, and build a complexity scale for PC. We performed a prospective, observational, multicenter study in a cohort of advanced cancer patients with an estimated prognosis of 6 months or less. An ad hoc structured evaluation including socio-demographic and clinical data, symptom burden, functional and cognitive status, psychosocial problems, and existential-ethical dilemmas was recorded systematically. According to this multidimensional evaluation, investigators classified patients as having high, medium, or low palliative complexity, associated with the need for basic or specialized PC. Logistic regression was used to identify the variables influencing determination of the level of PC complexity and to explore predictive models. We included 324 patients; 41% were classified as having high PC complexity and 42.9% as medium, both levels being associated with specialized PC. The variables influencing determination of PC complexity were: high symptom burden (OR 3.19, 95% CI 1.72-6.17), difficult pain (OR 2.81, 95% CI 1.64-4.9), functional status (OR 0.99, 95% CI 0.98-0.9), and social-ethical-existential risk factors (OR 3.11, 95% CI 1.73-5.77). Logistic analysis of these variables allowed construction of a complexity model and structured scales (PALCOM 1 and 2) with high predictive value (AUC ROC 76%). This study provides a new model and tools to assess complexity in palliative care, which may be very useful for managing referral to specialized PC services and agreeing on the intensity of their intervention within a model of early shared care integrated in oncology.

  17. Solving complex band structure problems with the FEAST eigenvalue algorithm

    NASA Astrophysics Data System (ADS)

    Laux, S. E.

    2012-08-01

    With straightforward extension, the FEAST eigenvalue algorithm [Polizzi, Phys. Rev. B 79, 115112 (2009)] is capable of solving the generalized eigenvalue problems representing traveling-wave problems—as exemplified by the complex band-structure problem—even though the matrices involved are complex, non-Hermitian, and singular, and hence outside the originally stated range of applicability of the algorithm. The obtained eigenvalues/eigenvectors, however, contain spurious solutions which must be detected and removed. The efficiency and parallel structure of the original algorithm are unaltered. The complex band structures of Si layers of varying thicknesses and InAs nanowires of varying radii are computed as test problems.
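
    FEAST itself is distributed as a separate library, but the flavour of the problem, and of the spurious solutions the abstract mentions, can be seen with SciPy's dense generalized eigensolver: when B is singular, some eigenvalues come back infinite and must be filtered out. A minimal sketch with a toy 2x2 pencil:

        import numpy as np
        from scipy.linalg import eig

        # Generalized eigenproblem A v = lambda B v with a singular, non-Hermitian B.
        A = np.array([[1.0, 2.0], [0.0, 3.0]], dtype=complex)
        B = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # singular

        w, v = eig(A, B)
        keep = np.isfinite(w)              # drop spurious/infinite eigenvalues
        print(w[keep], v[:, keep])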

  18. Integrated research in natural resources: the key role of problem framing.

    Treesearch

    Roger N. Clark; George H. Stankey

    2006-01-01

    Integrated research is about achieving holistic understanding of complex biophysical and social issues and problems. It is driven by the need to improve understanding about such systems and to improve resource management by using the results of integrated research processes. Traditional research tends to fragment complex problems, focusing more on the pieces...

  19. An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups

    ERIC Educational Resources Information Center

    Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi

    2012-01-01

    The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…

  20. The solution of the optimization problem of small energy complexes using linear programming methods

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.; Director, L. B.

    2016-11-01

    Linear programming methods were used for solving the optimization problem of schemes and operation modes of distributed-generation energy complexes. Applicability conditions of the simplex method, as applied to energy complexes that include renewable energy installations (solar, wind), diesel generators, and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was made. The results of optimization calculations for energy complexes, operated autonomously and as part of a distribution grid, are presented.
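
    The paper's exact LP formulation is not reproduced in the abstract; a minimal sketch of the kind of dispatch problem involved, using scipy.optimize.linprog with invented loads, costs, and capacities, might look like this (diesel fuel is the only cost; the battery is free but energy-limited):

        import numpy as np
        from scipy.optimize import linprog

        load  = np.array([8.0, 10.0, 12.0])   # hourly demand, kWh (toy values)
        solar = np.array([5.0,  3.0,  0.0])   # hourly solar generation, kWh
        net   = load - solar                  # residual demand for diesel + battery

        # Decision vector x = [diesel_1..3, battery_1..3]; minimise diesel fuel cost.
        c = np.concatenate([0.30 * np.ones(3), np.zeros(3)])
        A_eq = np.hstack([np.eye(3), np.eye(3)])        # diesel_t + battery_t = net_t
        b_eq = net
        A_ub = np.concatenate([np.zeros(3), np.ones(3)])[None, :]
        b_ub = [5.0]                                    # battery holds 5 kWh in total

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 8)] * 3 + [(0, 4)] * 3)   # per-hour capacity limits
        print(res.x, res.fun)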

  1. Optimal placement of multiple types of communicating sensors with availability and coverage redundancy constraints

    NASA Astrophysics Data System (ADS)

    Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.

    2010-04-01

    Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
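
    The cited greedy algorithm is not reproduced in detail here, but its core step, repeatedly adding the candidate location that covers the most still-uncovered targets, is easy to sketch (the coverage sets below are invented; in the paper they would come from probabilistic detection and communication models):

        def greedy_placement(candidates, coverage, k):
            # coverage[c] is the set of targets candidate location c can detect.
            chosen, covered = [], set()
            for _ in range(k):
                best = max(candidates, key=lambda c: len(coverage[c] - covered))
                if not coverage[best] - covered:
                    break                     # no remaining candidate adds coverage
                chosen.append(best)
                covered |= coverage[best]
                candidates = [c for c in candidates if c != best]
            return chosen, covered

        coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
        print(greedy_placement(["A", "B", "C"], coverage, k=2))   # picks A, then C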

  2. Cynefin as Reference Framework to Facilitate Insight and Decision-Making in Complex Contexts of Biomedical Research.

    PubMed

    Kempermann, Gerd

    2017-01-01

    The Cynefin scheme is a concept of knowledge management, originally devised to support decision making in management, but more generally applicable to situations in which complexity challenges the quality of insight, prediction, and decision. Despite the fact that life itself, and especially the brain and its diseases, are complex to the extent that complexity could be considered their cardinal feature, complex problems in biomedicine are often treated as if they were actually no more than the complicated sum of solvable sub-problems. Because of the emergent properties of complex contexts this is not correct. With a set of clear criteria, Cynefin helps to set apart complex problems from "simple/obvious," "complicated," "chaotic," and "disordered" contexts in order to avoid misinterpreting the relevant causality structures. The distinction comes with insight into which specific kind of knowledge is possible in each of these categories and what the consequences are for resulting decisions and actions. From students' theses over the publication and grant-writing process to research politics, misinterpretation of complexity can have problematic or even dangerous consequences, especially in clinical contexts. Conceptualization of problems within a straightforward reference language like Cynefin improves clarity and stringency within projects and facilitates communication and decision-making about them.

  3. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated, as it has been demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem, but the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. The inverse problem of ECG is then formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtains more accurate results than previous L2- and L1-norm regularization methods, especially when the noise is large.
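
    The paper's splitting and stopping details are not given in the abstract, but the gradient-projection core, a gradient step followed by projection onto the box constraints, is simple. A sketch for minimising 0.5 x^T Q x + b^T x subject to lo <= x <= hi (toy Q and b; projection onto a box is just clipping):

        import numpy as np

        def gradient_projection(Q, b, lo, hi, x0, iters=500):
            x = np.asarray(x0, float).copy()
            step = 1.0 / np.linalg.norm(Q, 2)          # safe step from the top curvature
            for _ in range(iters):
                grad = Q @ x + b
                x = np.clip(x - step * grad, lo, hi)   # projection = clipping to the box
            return x

        Q = np.array([[2.0, 0.5], [0.5, 1.0]])
        b = np.array([-1.0, -1.0])
        print(gradient_projection(Q, b, lo=0.0, hi=1.0, x0=np.zeros(2)))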

  4. Antheraea pernyi silk fibroin for targeted gene delivery of VEGF165-Ang-1 with PEI.

    PubMed

    Ma, Caili; Lv, Linlin; Liu, Yu; Yu, Yanni; You, Renchuan; Yang, Jicheng; Li, Mingzhong

    2014-06-01

    Vascularization is a crucial challenge in tissue engineering. One solution to this problem is to implant scaffolds containing functional genes that promote vascularization by providing angiogenic growth factors via a gene delivery carrier. Poly(ethylenimine) (PEI) is a gene delivery carrier with high transfection efficiency but notable cytotoxicity. To address this problem, we utilized Antheraea pernyi silk fibroin (ASF), which has favorable cytocompatibility and biodegradability, RGD sequences, and a negative charge, in conjunction with PEI, as the delivery vector for a vascular endothelial growth factor (VEGF) 165-angiopoietin-1 (Ang-1) dual-gene simultaneous expression plasmid, creating an ASF/PEI/pDNA complex. The results suggested that the zeta potential of the ASF/PEI/pDNA complex was significantly lower than that of the PEI/pDNA complex. Decreased nitrogen and increased oxygen on the surface of the complex demonstrated that the ASF had successfully combined with the surface of the PEI/pDNA. Furthermore, the complexes resisted digestion by nucleic acid enzymes and degradation by serum. L929 cells were cultured and transfected in vitro, and reduced cytotoxicity was observed when the cells were transfected with ASF/PEI/pDNA compared with PEI/pDNA. In addition, the transfection efficiency and VEGF secretion increased. Overall, this study provides a novel method for decreasing the cytotoxicity of PEI gene delivery vectors while increasing the transfection efficiency of angiogenesis-related genes.

  5. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, rapidly developing planning business, and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system, designed to deal with these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, a scalable DLL-based architecture, flexibility across GIS and database platforms, and scalable data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability, meeting the requirements of all levels of administrative divisions and adapting to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.

  6. Strategic optimisation of microgrid by evolving a unitised regenerative fuel cell system operational criterion

    NASA Astrophysics Data System (ADS)

    Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh

    2016-09-01

    In this paper, the problem of microgrid optimisation with storage is addressed in a broader way than the usual restriction to loss minimisation. Unitised regenerative fuel cell (URFC) systems have been studied and employed in microgrids to store energy and feed it back into the system when required. A value function depending on line losses, URFC system operational cost, and stored energy at the end of the day is defined here. The function is highly complex, nonlinear, and multidimensional in nature. Therefore, heuristic optimisation techniques in combination with load flow analysis are used to resolve the network and time-domain complexity of the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are presented and found to improve consistently with the evolution of the solution strategy.
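
    The paper's value function and the forward/backward sweep load flow are not reproduced here; a bare-bones particle swarm optimiser over a stand-in cost function shows the search loop involved (all hyperparameters and the cost are illustrative):

        import numpy as np

        def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            rng = np.random.default_rng(1)
            x = rng.uniform(lo, hi, (n, dim))                 # particle positions
            v = np.zeros((n, dim))                            # particle velocities
            pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()                # global best
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([cost(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Toy stand-in for the URFC value function (loss + operating-cost terms).
        print(pso(lambda p: np.sum(p**2) + 0.1 * np.sum(np.abs(p)), dim=4))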

  7. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
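
    The complex-variable verification mentioned above is presumably the complex-step derivative trick: evaluating f(x + ih) for tiny h gives f'(x) ≈ Im f(x + ih) / h with no subtractive cancellation, so adjoint sensitivities can be checked to machine precision. A one-function illustration:

        import numpy as np

        def complex_step(f, x, h=1e-30):
            # No subtraction of nearly equal numbers, unlike finite differences,
            # so the result is accurate to machine precision.
            return np.imag(f(x + 1j * h)) / h

        f = lambda x: np.exp(x) * np.sin(x)
        x0 = 0.7
        exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
        print(complex_step(f, x0), exact)   # agree to ~16 digits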

  8. Brain Dynamics: Methodological Issues and Applications in Psychiatric and Neurologic Diseases

    NASA Astrophysics Data System (ADS)

    Pezard, Laurent

    The human brain is a complex dynamical system generating the EEG signal. Numerical methods developed to study complex physical dynamics have been used to characterize EEG since the mid-eighties. This endeavor raised several issues related to the specificity of EEG. First, theoretical and methodological studies should address the major differences between the dynamics of the human brain and that of physical systems. Second, this approach to the EEG signal should prove relevant for dealing with physiological or clinical problems. A set of studies performed in our group is presented here within the context of these two problematic aspects. After a discussion of methodological drawbacks, we review numerical simulations related to the high dimension and spatial extension of brain dynamics. Experimental studies in neurologic and psychiatric diseases are then presented. We conclude that although it is now clear that brain dynamics change in relation to clinical situations, methodological problems remain largely unsolved.

  9. Influence maximization based on partial network structure information: A comparative analysis on seed selection heuristics

    NASA Astrophysics Data System (ADS)

    Erkol, Şirag; Yücel, Gönenç

    In this study, the problem of seed selection is investigated. This problem is mainly treated as an optimization problem, which has been proved NP-hard. There are several heuristic approaches in the literature, most of which use algorithmic heuristics. These approaches mainly focus on the trade-off between computational complexity and accuracy: although the accuracy of algorithmic heuristics is high, they also have high computational complexity. Furthermore, the literature generally assumes that complete information on the structure and features of a network is available, which is often not the case. For this study, a simulation model is constructed that is capable of creating networks, performing seed selection heuristics, and simulating diffusion models. Novel metric-based seed selection heuristics that rely only on partial information are proposed and tested using the simulation model. These heuristics use local information available from nodes in the synthetically created networks. The performance of the heuristics is comparatively analyzed on three different network types. The results clearly show that the performance of a heuristic depends on the structure of a network; a heuristic should be selected only after investigating the properties of the network at hand. More importantly, the partial-information approach provided promising results: in certain cases, selection heuristics that rely only on partial network information perform very close to similar heuristics that require complete network data.
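
    The specific metrics studied in the paper are not listed in the abstract; the simplest example of the genre is a degree-based heuristic that uses only information obtainable by probing each candidate node locally:

        import heapq

        def select_seeds(local_degree, k):
            # local_degree: {node: degree observed by probing that node only}.
            return heapq.nlargest(k, local_degree, key=local_degree.get)

        print(select_seeds({"a": 12, "b": 3, "c": 7, "d": 9}, k=2))   # ['a', 'd']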

  10. Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Zhong; Lai, Ying-Cheng

    2018-03-01

    Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulates its dynamical behavior with high precision.
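
    The SDBM construction itself is not spelled out in the abstract; the compressive-sensing ingredient it relies on can be illustrated with a sparse regression that recovers one node's incoming links from observed dynamics (synthetic data; scikit-learn's Lasso stands in for the paper's solver):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n_nodes, n_samples = 20, 100
        true_links = np.zeros(n_nodes)
        true_links[[3, 7, 15]] = [0.8, -0.5, 0.6]     # sparse ground-truth in-links

        X = rng.standard_normal((n_samples, n_nodes))          # observed node states
        y = X @ true_links + 0.01 * rng.standard_normal(n_samples)  # node's response

        model = Lasso(alpha=0.05).fit(X, y)
        print(np.nonzero(model.coef_)[0])   # should recover links 3, 7, 15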

  11. Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics.

    PubMed

    Chen, Yu-Zhong; Lai, Ying-Cheng

    2018-03-01

    Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulates its dynamical behavior with high precision.

  12. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
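
    The Class One/Class Two distinction is easy to reproduce numerically: when each evaluation of the objective carries modeling or solver noise, any finite-difference step smaller than the noise floor returns garbage, which is exactly what trips up precision-sensitive optimizers. A toy illustration:

        import numpy as np

        rng = np.random.default_rng(3)

        def class_two_objective(x, noise=1e-3):
            # Smooth underlying function, but each evaluation carries solver noise.
            return (x - 1.0) ** 2 + noise * rng.standard_normal()

        h = 1e-6   # step far below the noise floor
        fd = (class_two_objective(2.0 + h) - class_two_objective(2.0)) / h
        print(fd, "vs exact derivative", 2.0)   # wildly wrong: noise/h ~ 1e3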

  13. Non-Residential Father-Child Involvement, Interparental Conflict and Mental Health of Children Following Divorce: A Person-Focused Approach

    PubMed Central

    Elam, Kit K.; Sandler, Irwin; Wolchik, Sharlene; Tein, Jenn-Yun

    2015-01-01

    Variable-centered research has found complex relationships between child well-being and two critical aspects of the post-divorce family environment: the level of non-residential father involvement (i.e., contact and supportive relationship) with their children and the level of conflict between the father and mother. However, these analyses fail to capture individual differences based on distinct patterns of interparental conflict, father support, and father contact. Using a person-centered latent profile analysis, the present study examined (1) profiles of non-residential father contact, support, and interparental conflict in the two years following divorce (N = 240), when children (49% female) were between 9 and 12 years of age, and (2) differences across profiles in concurrent child adjustment outcomes as well as outcomes six years later. Four profiles of father involvement were identified: High Contact – Moderate Conflict – Moderate Support, Low Contact – Moderate Conflict – Low Support, High Conflict – Moderate Contact – Moderate Support, and Low Conflict – Moderate Contact – Moderate Support. Concurrently, children with fathers in the group with high conflict were found to have significantly greater internalizing and externalizing problems compared to all other groups. Six years later, children with fathers in the group with low contact and low support were found to have greater internalizing and externalizing problems compared to children with fathers in the high conflict group, and also greater internalizing problems compared to children with fathers in the low conflict group. These results provide insight into the complex relationship among non-residential fathers' conflict, contact, and support in child adjustment within divorcing families. PMID:26692236

  14. Multiscale high-order/low-order (HOLO) algorithms and applications

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.

    2017-02-01

    We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  15. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
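
    CVN's actual architecture is not described here; the following PyTorch sketch only shows the general shape of a small CNN classifying calorimeter hit maps by topology (layer sizes and the three-class output are illustrative, not NOvA's):

        import torch
        import torch.nn as nn

        class TinyEventClassifier(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4),
                )
                self.head = nn.Linear(32 * 4 * 4, n_classes)

            def forward(self, x):          # x: (batch, 1, H, W) energy-deposit maps
                return self.head(self.features(x).flatten(1))

        logits = TinyEventClassifier()(torch.randn(8, 1, 80, 100))
        print(logits.shape)                # torch.Size([8, 3])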

  16. RIACS

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1997-01-01

    Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible navier-stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.

  17. Super resolution reconstruction of infrared images based on classified dictionary learning

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images often suffer from low resolution owing to the limitations of imaging devices. An economical way to combat this problem is to reconstruct high-resolution images algorithmically, without upgrading the devices. Inspired by compressed sensing theory, this study presents and demonstrates a classified dictionary learning method for reconstructing high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction, and therefore more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments demonstrate that this is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information about targets.
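
    The paper's training data and feature definitions are not given in the abstract; the core "classified dictionary" idea (cluster the patches, then learn one sparse dictionary per cluster) can be sketched with scikit-learn, with random patches standing in for real low/high-resolution training pairs:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(4)
        patches = rng.standard_normal((500, 64))        # 500 flattened 8x8 patches

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(patches)
        dictionaries = {
            k: MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                           random_state=0).fit(patches[labels == k])
            for k in range(3)
        }
        # At reconstruction time each patch is sparse-coded with its cluster's
        # dictionary; a paired high-resolution dictionary would synthesise output.
        codes = dictionaries[0].transform(patches[labels == 0])
        print(codes.shape)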

  18. A convolutional neural network neutrino event classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, A.; Radovic, A.; Rocco, D.

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  19. Assessment of Hybrid RANS/LES Turbulence Models for Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.

    2010-01-01

    Predicting the noise from aircraft with exposed landing gear remains a challenging problem for the aeroacoustics community. Although computational fluid dynamics (CFD) has shown promise as a technique that could produce high-fidelity flow solutions, generating grids that can resolve the pertinent physics around complex configurations can be very challenging. Structured grids are often impractical for such configurations. Unstructured grids offer a path forward for simulating complex configurations. However, few unstructured grid codes have been thoroughly tested for unsteady flow problems in the manner needed for aeroacoustic prediction. A widely used unstructured grid code, FUN3D, is examined for resolving the near field in unsteady flow problems. Although the ultimate goal is to compute the flow around complex geometries such as the landing gear, simpler problems that include some of the relevant physics, and are easily amenable to the structured grid approaches are used for testing the unstructured grid approach. The test cases chosen for this study correspond to the experimental work on single and tandem cylinders conducted in the Basic Aerodynamic Research Tunnel (BART) and the Quiet Flow Facility (QFF) at NASA Langley Research Center. These configurations offer an excellent opportunity to assess the performance of hybrid RANS/LES turbulence models that transition from RANS in unresolved regions near solid bodies to LES in the outer flow field. Several of these models have been implemented and tested in both structured and unstructured grid codes to evaluate their dependence on the solver and mesh type. Comparison of FUN3D solutions with experimental data and numerical solutions from a structured grid flow solver are found to be encouraging.

  20. Genome-wide detection of intervals of genetic heterogeneity associated with complex traits

    PubMed Central

    Llinares-López, Felipe; Grimm, Dominik G.; Bodenham, Dean A.; Gieraths, Udo; Sugiyama, Mahito; Rowan, Beth; Borgwardt, Karsten

    2015-01-01

    Motivation: Genetic heterogeneity, the fact that several sequence variants give rise to the same phenotype, is a phenomenon that is of the utmost interest in the analysis of complex phenotypes. Current approaches for finding regions in the genome that exhibit genetic heterogeneity suffer from at least one of two shortcomings: (i) they require the definition of an exact interval in the genome that is to be tested for genetic heterogeneity, potentially missing intervals of high relevance, or (ii) they suffer from an enormous multiple hypothesis testing problem due to the large number of potential candidate intervals being tested, which results in either many false positives or a lack of power to detect true intervals.

    Results: Here, we present an approach that overcomes both problems: it allows one to automatically find all contiguous sequences of single nucleotide polymorphisms in the genome that are jointly associated with the phenotype. It also solves both the inherent computational efficiency problem and the statistical problem of multiple hypothesis testing, which are both caused by the huge number of candidate intervals. We demonstrate on Arabidopsis thaliana genome-wide association study data that our approach can discover regions that exhibit genetic heterogeneity and would be missed by single-locus mapping.

    Conclusions: Our novel approach can contribute to the genome-wide discovery of intervals that are involved in the genetic heterogeneity underlying complex phenotypes.

    Availability and implementation: The code can be obtained at: http://www.bsse.ethz.ch/mlcb/research/bioinformatics-and-computational-biology/sis.html

    Contact: felipe.llinares@bsse.ethz.ch

    Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072488

  1. Socioeconomic status and alcohol-related behaviors in mid- to late adolescence in the Avon Longitudinal Study of Parents and Children.

    PubMed

    Kendler, Kenneth S; Gardner, Charles O; Hickman, Matt; Heron, Jon; Macleod, John; Lewis, Glyn; Dick, Danielle M

    2014-07-01

    Prior studies of the relationship between socioeconomic status (SES) and alcohol consumption and problems in adolescence have been inconclusive. Few studies have examined all three major SES indicators and a broad range of alcohol-related outcomes at different ages. In the Avon Longitudinal Study of Parents and Children cohort, we examined (by logistic regression, with differential weighting to control for attrition) the relationship between family income, parental education, and occupational status and five alcohol outcomes assessed at ages 16 and 18 years. At age 16, high SES (as indexed by income and education) significantly predicted frequent alcohol consumption. Low SES (as measured by education and occupational status) predicted alcohol-related problems. At age 18, high SES (particularly income and education) significantly predicted frequent alcohol consumption and heavy episodic drinking and, more weakly, symptoms of alcohol dependence. All three measures of SES were inversely related to high-quantity consumption and alcohol behavioral problems. In adolescents in the United Kingdom, the relationship between SES and alcohol-related behaviors is complex and varies as a function of age, SES measure, and specific outcome. High SES tends to predict increased consumption and, in later adolescence, heavy episodic drinking and perhaps symptoms of alcohol dependence. Low SES predicts alcohol-related behavioral problems and, in later adolescence, high-quantity alcohol consumption.

  2. Innovation processes in technologies for the processing of refractory mineral raw materials

    NASA Astrophysics Data System (ADS)

    Chanturiya, V. A.

    2008-12-01

    Analysis of the grade of mineral resources of Russia and other countries shows that end products that are competitive in terms of both technological and environmental criteria in the world market can only be obtained by the development and implementation of progressive technologies based on the up-to-date achievements of fundamental sciences. The essence of modern innovation processes in technologies developed in Russia for the complex and comprehensive processing of refractory raw materials with a complex composition is ascertained. These processes include (i) radiometric methods of concentration of valuable components, (ii) high-energy methods of disintegration of highly dispersed mineral components, and (iii) electrochemical methods of water conditioning to obtain target products for solving specific technological problems.

  3. Groundwater geochemistry of a Mio-Pliocene aquifer in the northeastern Algerian Sahara (Djamaa region)

    NASA Astrophysics Data System (ADS)

    Houari, Idir Menad; Nezli, Imed Eddine; Belksier, Mohamed Salah

    2018-05-01

    The groundwater resources in the Northern Sahara are represented by two superimposed major aquifer systems: the Continental Intercalary (CI) and the Terminal Complex (CT). The waters of these aquifers pose serious physical and chemical quality problems; they are highly mineralized and very hard. The present work aims to describe the geochemical evolution of the Mio-Pliocene sand aquifer waters of the Terminal Complex in the area of Djamaa by investigating the relationship between the waters' chemical composition and the lithology of the aquifer formations. The results obtained show that the water chemistry is essentially governed by the dissolution of evaporite formations, which gives the waters an excessive mineralization expressed by high concentrations of sulfates, chlorides, and sodium.

  4. Alternative Architectures for Distributed Cooperative Problem-Solving in the National Airspace System

    NASA Technical Reports Server (NTRS)

    Smith, Phillip J.; Billings, Charles; McCoy, C. Elaine; Orasanu, Judith

    1999-01-01

    The air traffic management system in the United States is an example of a distributed problem solving system. It has elements of both cooperative and competitive problem-solving. This system includes complex organizations such as Airline Operations Centers (AOCs), the FAA Air Traffic Control Systems Command Center (ATCSCC), and traffic management units (TMUs) at enroute centers and TRACONs, all of which have a major focus on strategic decision-making. It also includes individuals concerned more with tactical decisions (such as air traffic controllers and pilots). The architecture for this system has evolved over time to rely heavily on the distribution of tasks and control authority in order to keep cognitive complexity manageable for any one individual operator, and to provide redundancy (both human and technological) to serve as a safety net to catch the slips or mistakes that any one person or entity might make. Currently, major changes are being considered for this architecture, especially with respect to the locus of control, in an effort to improve efficiency and safety. This paper uses a series of case studies to help evaluate some of these changes from the perspective of system complexity, and to point out possible alternative approaches that might be taken to improve system performance. The paper illustrates the need to maintain a clear understanding of what is required to assure a high level of performance when alternative system architectures and decompositions are developed.

  5. Perceptual learning modules in mathematics: enhancing students' pattern recognition, structure extraction, and fluency.

    PubMed

    Kellman, Philip J; Massey, Christine M; Son, Ji Y

    2010-04-01

    Learning in educational settings emphasizes declarative and procedural knowledge. Studies of expertise, however, point to other crucial components of learning, especially improvements produced by experience in the extraction of information: perceptual learning (PL). We suggest that such improvements characterize both simple sensory and complex cognitive, even symbolic, tasks through common processes of discovery and selection. We apply these ideas in the form of perceptual learning modules (PLMs) to mathematics learning. We tested three PLMs, each emphasizing different aspects of complex task performance, in middle and high school mathematics. In the MultiRep PLM, practice in matching function information across multiple representations improved students' abilities to generate correct graphs and equations from word problems. In the Algebraic Transformations PLM, practice in seeing equation structure across transformations (but not solving equations) led to dramatic improvements in the speed of equation solving. In the Linear Measurement PLM, interactive trials involving extraction of information about units and lengths produced successful transfer to novel measurement problems and fraction problem solving. Taken together, these results suggest (a) that PL techniques have the potential to address crucial, neglected dimensions of learning, including discovery and fluent processing of relations; (b) PL effects apply even to complex tasks that involve symbolic processing; and (c) appropriately designed PL technology can produce rapid and enduring advances in learning. Copyright © 2009 Cognitive Science Society, Inc.

  6. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon-constraint methods, are ill-suited to solving these complex multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Nevertheless, there is a continuing need for computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems, particularly when the objective functions and constraints of the real-world system cannot be expressed in explicit mathematical form. This research presents a novel metaheuristic algorithm for complex multiobjective optimization problems that combines tabu search with an evolutionary (genetic) algorithm, called TSEA. TSEA is successfully applied to bicriteria (structural reliability and retrofit cost) optimization of aircraft tail structure fatigue life, increasing reliability by prolonging fatigue life. A comparison of TSEA with several state-of-the-art multiobjective optimization algorithms on this application reveals that TSEA outperforms them by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with, and in some situations outperforms, state-of-the-art multiobjective optimization algorithms such as NSGA-II and SPEA 2 when applied to classic bicriteria test problems from the technical literature and to other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
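
    The full multiobjective TSEA is beyond a short sketch, but the hybrid's key move, a genetic step whose offspring are rejected if they fall on a tabu list of recently visited (coarsened) solutions, can be shown on a single-objective toy problem; everything below, including the cost function, is illustrative:

        import numpy as np

        rng = np.random.default_rng(5)

        def cost(x):                        # toy stand-in for a retrofit objective
            return float(np.sum((x - 0.3) ** 2))

        def tabu_evolutionary(dim=8, pop=20, gens=100, tabu_len=50):
            population = rng.random((pop, dim))
            tabu = []
            for _ in range(gens):
                fitness = np.array([cost(ind) for ind in population])
                parents = population[np.argsort(fitness)[: pop // 2]]
                children = []
                while len(children) < pop:
                    a, b = parents[rng.integers(len(parents), size=2)]
                    child = np.clip((a + b) / 2 + 0.05 * rng.standard_normal(dim), 0, 1)
                    key = tuple(np.round(child, 1))
                    if key not in tabu:      # tabu check on a coarsened key
                        children.append(child)
                        tabu.append(key)
                        tabu = tabu[-tabu_len:]
                population = np.array(children)
            best = min(population, key=cost)
            return best, cost(best)

        print(tabu_evolutionary())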

  7. When Districts Encounter Teacher Shortages: The Challenges of Recruiting and Retaining Mathematics Teachers in Urban Districts

    ERIC Educational Resources Information Center

    Liu, Edward; Rosenstein, Joseph G.; Swan, Aubrie E.; Khalil, Deena

    2008-01-01

    Administrators in six urban districts were interviewed to understand the nature and extent of their problems with recruiting and retaining high quality mathematics teachers. Findings suggest that the math staffing challenge is quite complex, and administrators have had to make difficult compromises because of deficiencies in the quantity and…

  8. Experiencing a Mathematical Problem-Solving Teaching Approach: Opportunities to Identify Ambitious Teaching Practices

    ERIC Educational Resources Information Center

    Bailey, Judy; Taylor, Merilyn

    2015-01-01

    Learning to teach is a complex matter, and many different models of pre-service teacher education have been used to support novice teachers' preparation for the classroom. More recently there have been calls for a focus on core high-leverage teaching practices and for novice teachers to engage in representations, decompositions, and approximations…

  9. Subtypes in 22q11.2 Deletion Syndrome Associated with Behaviour and Neurofacial Morphology

    ERIC Educational Resources Information Center

    Sinderberry, Brooke; Brown, Scott; Hammond, Peter; Stevens, Angela F.; Schall, Ulrich; Murphy, Declan G. M.; Murphy, Kieran C.; Campbell, Linda E.

    2013-01-01

    22q11.2 deletion syndrome (22q11DS) has a complex phenotype with more than 180 characteristics, including cardiac anomalies, cleft palate, intellectual disabilities, a typical facial morphology, and mental health problems. However, the variable phenotype makes it difficult to predict clinical outcome, such as the high prevalence of psychosis among…

  10. Teaching and Assessing Tag Rugby Made Simple

    ERIC Educational Resources Information Center

    Harvey, Stephen; Hughes, Christopher

    2009-01-01

    The game of rugby is a fast and fluid invasion game, similar to football, that involves scoring with an oval ball into an end zone. The game presents, like other invasion games, a series of highly complex tactical problems so that the ball can be maneuvered into a scoring position. Pugh and Alford (2004) recently indicated that rugby is now…

  11. U.S. Naval War College Global 2014 Game Report

    DTIC Science & Technology

    2015-02-01

    Gaming Department faculty and documents the findings of these efforts. The War Gaming Department conducts high quality research, analysis, gaming, and...Navy. It strives to provide interested parties with intellectually honest analysis of complex problems using a wide range of research tools and...

  12. Learning Outcomes from Business Simulation Exercises: Challenges for the Implementation of Learning Technologies

    ERIC Educational Resources Information Center

    Clarke, Elizabeth

    2009-01-01

    Purpose: High order leadership, problem solving skills, and the capacity for innovation in new markets, and technologically complex and multidimensional contexts, are the new set of skills that are most valued by companies and employers alike. Business simulation exercises are one way of enhancing these skills. This article aims to examine the…

  13. Climbing up the Leaderboard: An Empirical Study of Applying Gamification Techniques to a Computer Programming Class

    ERIC Educational Resources Information Center

    Fotaris, Panagiotis; Mastoras, Theodoros; Leinfellner, Richard; Rosunally, Yasmine

    2016-01-01

    Conventional taught learning practices often experience difficulties in keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment…

  14. Web-Mediated Problem-Based Learning and Computer Programming: Effects of Study Approach on Academic Achievement and Attitude

    ERIC Educational Resources Information Center

    Yagci, Mustafa

    2018-01-01

    In the relevant literature, it is often debated whether learning programming requires high-level thinking skills, the lack of which consequently results in the failure of students in programming. The complex nature of programming and individual differences, including study approaches, thinking styles, and the focus of supervision, all have an…

  15. A Model for Developing Improvements to Critical Thinking Skills across the Community College Curriculum

    ERIC Educational Resources Information Center

    McGarrity, DeShawn N.

    2013-01-01

    Society is faced with more complex problems than in the past because of rapid advancements in technology. These complex problems require multi-dimensional problem-solving abilities that are consistent with higher-order thinking skills. Bok (2006) posits that over 90% of U.S. faculty members consider critical thinking skills as essential for…

  16. A Real-Life Case Study of Audit Interactions--Resolving Messy, Complex Problems

    ERIC Educational Resources Information Center

    Beattie, Vivien; Fearnley, Stella; Hines, Tony

    2012-01-01

    Real-life accounting and auditing problems are often complex and messy, requiring the synthesis of technical knowledge in addition to the application of generic skills. To help students acquire the necessary skills to deal with these problems effectively, educators have called for the use of case-based methods. Cases based on real situations (such…

  17. Student Learning of Complex Earth Systems: A Model to Guide Development of Student Expertise in Problem-Solving

    ERIC Educational Resources Information Center

    Holder, Lauren N.; Scherer, Hannah H.; Herbert, Bruce E.

    2017-01-01

    Engaging students in problem-solving concerning environmental issues in near-surface complex Earth systems involves developing student conceptualization of the Earth as a system and applying that scientific knowledge to the problems using practices that model those used by professionals. In this article, we review geoscience education research…

  18. Utilizing a Sense of Community Theory in Order to Optimize Interagency Response to Complex Contingencies

    DTIC Science & Technology

    2010-06-01

    ...membership such as clothes, signs, art, architecture, logos, landmarks, and flags that people can... States during complex contingency operations depends on a “whole of nation” approach to solving complex problems. Psychological sense of community (PSOC) theory provides the link that explains how...

  19. Solution of a Complex Least Squares Problem with Constrained Phase.

    PubMed

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
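
    Concretely, the constraint is that every element of the solution shares one phase phi, i.e. x = u * exp(i*phi) with u real. The sketch below recovers such a solution by a brute-force sweep over phi, solving a real least squares problem for each candidate phase; the paper derives a direct, non-iterative solution, so this is only an illustration of the problem statement.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
    b = rng.standard_normal(8) + 1j * rng.standard_normal(8)

    best = None
    for phi in np.linspace(0, np.pi, 1801):  # phase defined mod pi: u may flip sign
        Ap = A * np.exp(1j * phi)
        # Stacking real and imaginary parts enforces a real solution u.
        M = np.vstack([Ap.real, Ap.imag])
        y = np.concatenate([b.real, b.imag])
        u, *_ = np.linalg.lstsq(M, y, rcond=None)
        r = np.linalg.norm(A @ (u * np.exp(1j * phi)) - b)
        if best is None or r < best[0]:
            best = (r, phi)
    print("residual %.4f at phase %.3f rad" % best)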

  20. Factors Affecting Police Officers' Acceptance of GIS Technologies: A Study of the Turkish National Police

    ERIC Educational Resources Information Center

    Cakar, Bekir

    2011-01-01

    The situations and problems that police officers face are more complex in today's society, due in part to the increase of technology and growing complexity of globalization. Accordingly, to solve these problems and deal with the complexities, law enforcement organizations develop and apply new techniques and methods such as geographic information…

  1. Anodal Transcranial Direct Current Stimulation of the Prefrontal Cortex Enhances Complex Verbal Associative Thought

    ERIC Educational Resources Information Center

    Cerruti, Carlo; Schlaug, Gottfried

    2009-01-01

    The remote associates test (RAT) is a complex verbal task with associations to both creative thought and general intelligence. RAT problems require not only lateral associations and the internal production of many words but a convergent focus on a single answer. Complex problem-solving of this sort may thus require both substantial verbal…

  2. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  3. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the cost of fault reasoning as the function to be minimized. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto-optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can perform reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.
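
    For readers unfamiliar with ant colony methods, the sketch below shows the standard pseudo-random-proportional rule of the kind the paper modifies; restricting the feasible node set plays the role of the reachability-matrix constraint. The parameters alpha, beta and q0 are the conventional ant-colony-system ones, not values from the paper.

    import random

    def choose_next(tau, eta, feasible, alpha=1.0, beta=2.0, q0=0.9):
        """Pick the next node given pheromone tau[j] and heuristic eta[j]."""
        weights = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in feasible}
        if random.random() < q0:               # exploit: greedy choice
            return max(weights, key=weights.get)
        total = sum(weights.values())          # explore: roulette-wheel choice
        r, acc = random.random() * total, 0.0
        for j, w in weights.items():
            acc += w
            if acc >= r:
                return j
        return j                               # numerical safety net

    tau = {1: 0.5, 2: 0.8, 3: 0.1}             # pheromone levels
    eta = {1: 1.0, 2: 0.5, 3: 2.0}             # heuristic desirability
    print(choose_next(tau, eta, feasible=[1, 2, 3]))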

  4. Generalist solutions to complex problems: generating practice-based evidence - the example of managing multi-morbidity

    PubMed Central

    2013-01-01

    Background A growing proportion of people are living with long term conditions. The majority have more than one. Dealing with multi-morbidity is a complex problem for health systems: for those designing and implementing healthcare as well as for those providing the evidence informing practice. Yet the concept of multi-morbidity (the presence of >2 diseases) is a product of the design of health care systems which define health care need on the basis of disease status. So does the solution lie in an alternative model of healthcare? Discussion Strengthening generalist practice has been proposed as part of the solution to tackling multi-morbidity. Generalism is a professional philosophy of practice, deeply known to many practitioners, and described as expertise in whole person medicine. But generalism lacks the evidence base needed by policy makers and planners to support service redesign. The challenge is to fill this practice-research gap in order to critically explore if and when generalist care offers a robust alternative to management of this complex problem. We need practice-based evidence to fill this gap. By recognising generalist practice as a ‘complex intervention’ (intervening in a complex system), we outline an approach to evaluate impact using action-research principles. We highlight the implications for those who both commission and undertake research in order to tackle this problem. Summary Answers to the complex problem of multi-morbidity won’t come from doing more of the same. We need to change systems of care, and so the systems for generating evidence to support that care. This paper contributes to that work through outlining a process for generating practice-based evidence of generalist solutions to the complex problem of person-centred care for people with multi-morbidity. PMID:23919296

  5. Generalist solutions to complex problems: generating practice-based evidence--the example of managing multi-morbidity.

    PubMed

    Reeve, Joanne; Blakeman, Tom; Freeman, George K; Green, Larry A; James, Paul A; Lucassen, Peter; Martin, Carmel M; Sturmberg, Joachim P; van Weel, Chris

    2013-08-07

    A growing proportion of people are living with long term conditions. The majority have more than one. Dealing with multi-morbidity is a complex problem for health systems: for those designing and implementing healthcare as well as for those providing the evidence informing practice. Yet the concept of multi-morbidity (the presence of >2 diseases) is a product of the design of health care systems which define health care need on the basis of disease status. So does the solution lie in an alternative model of healthcare? Strengthening generalist practice has been proposed as part of the solution to tackling multi-morbidity. Generalism is a professional philosophy of practice, deeply known to many practitioners, and described as expertise in whole person medicine. But generalism lacks the evidence base needed by policy makers and planners to support service redesign. The challenge is to fill this practice-research gap in order to critically explore if and when generalist care offers a robust alternative to management of this complex problem. We need practice-based evidence to fill this gap. By recognising generalist practice as a 'complex intervention' (intervening in a complex system), we outline an approach to evaluate impact using action-research principles. We highlight the implications for those who both commission and undertake research in order to tackle this problem. Answers to the complex problem of multi-morbidity won't come from doing more of the same. We need to change systems of care, and so the systems for generating evidence to support that care. This paper contributes to that work through outlining a process for generating practice-based evidence of generalist solutions to the complex problem of person-centred care for people with multi-morbidity.

  6. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays

    PubMed Central

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-01

    The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type uses digital buffers in the row driver. It is easy to implement because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers. The other type uses a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it needs a large number of operational amplifiers to drive the rows of the sensor array and therefore severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver which uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show a maximum error of only 0.1%, remarkably reduced from the 30.7% of its previous counterpart. PMID:26821029

  7. Modeling and simulation of dynamic ant colony's labor division for task allocation of UAV swarm

    NASA Astrophysics Data System (ADS)

    Wu, Husheng; Li, Hao; Xiao, Renbin; Liu, Jie

    2018-02-01

    The problem of unmanned aerial vehicle (UAV) task allocation not only has intrinsically complex attributes, being highly nonlinear, dynamic, highly adversarial and multi-modal, but also has broad practical relevance to various multi-agent systems, which has made it increasingly attractive in recent years. In this paper, based on the classic fixed response threshold model (FRTM), under the idea of "problem centered + evolutionary solution" and in a bottom-up way, a new dynamic environmental stimulus, response threshold and transition probability are designed, and a dynamic ant colony's labor division (DACLD) model is proposed. DACLD allows a swarm of agents with a relatively low level of intelligence to perform complex tasks, and has the characteristics of a distributed framework, multiple tasks with execution order, multi-state behavior, adaptive response thresholds and multi-individual response. With the proposed model, numerical simulations are performed to illustrate the effectiveness of the distributed task allocation scheme in two situations of UAV swarm combat (dynamic task allocation with a certain number of enemy targets, and task re-allocation due to unexpected threats). Results show that our model can obtain both the heterogeneous UAVs' real-time positions and their states at the same time, and has a high degree of self-organization, flexibility and real-time response to dynamic environments.

  8. Characterisation of High Grazing Angle X-band Sea-clutter Doppler Spectra

    DTIC Science & Technology

    2013-08-01

    The ocean surface is a highly complex dynamical system, and relating Doppler spectra to surface conditions is a difficult problem...1966] then extended this theory to water and classified it as a 'slightly rough' surface. He showed that the scattering elements of primary importance...incidence field. This is the definition for the Bragg water-wave propagation number, defined in the spatial frequency domain as k_w = 2 k_0 cos θ, where...
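
    As a worked example of the quoted relation k_w = 2 k_0 cos θ, with illustrative values (an assumed X-band frequency and incidence angle, not numbers taken from the report):

    import math

    f = 9.4e9                      # X-band radar frequency, Hz (assumed)
    c = 3.0e8                      # speed of light, m/s
    theta = math.radians(80.0)     # incidence angle (assumed)

    k0 = 2 * math.pi * f / c       # radar wavenumber, rad/m
    kw = 2 * k0 * math.cos(theta)  # Bragg water wavenumber, rad/m
    print("k0 = %.1f rad/m, kw = %.1f rad/m, Bragg wavelength = %.1f cm"
          % (k0, kw, 2 * math.pi / kw * 100))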

  9. USMC Ground Surveillance Robot (GSR): Lessons Learned

    NASA Astrophysics Data System (ADS)

    Harmon, S. Y.

    1987-02-01

    This paper describes the design of an autonomous vehicle and the lessons learned during the implementation of that complex robot. The major problems encountered to which solutions were found include sensor processing bandwidth limitations, coordination of the interactions between major subsystems, sensor data fusion and system knowledge representation. Those problems remaining unresolved include system complexity management, the lack of powerful system monitoring and debugging tools, exploratory implementation of a complex system and safety and testing issues. Many of these problems arose from working with underdeveloped and continuously evolving technology and will probably be resolved as the technological resources mature and stabilize. Unfortunately, other problems will continue to plague developers throughout the evolution of autonomous system technology.

  10. Mexican high school students' social representations of mathematics, its teaching and learning

    NASA Astrophysics Data System (ADS)

    Martínez-Sierra, Gustavo; Miranda-Tirado, Marisa

    2015-07-01

    This paper reports qualitative research that identifies Mexican high school students' social representations of mathematics. For this purpose, the social representations of 'mathematics', 'learning mathematics' and 'teaching mathematics' were identified in a group of 50 students. Focus group interviews were carried out in order to obtain the data. The constant comparative style was the strategy used for the data analysis because it allowed the categories to emerge from the data. The students' social representations are: (A) Mathematics is…(1) important for daily life, (2) important for careers and for life, (3) important because it is in everything that surrounds us, (4) a way to solve problems of daily life, (5) calculations and operations with numbers, (6) complex and difficult, (7) exact, and (8) a subject that develops thinking skills; (B) To learn mathematics is…(1) to possess knowledge to solve problems, (2) to be able to solve everyday problems, (3) to be able to make calculations and operations, and (4) to think logically to be able to solve problems; and (C) To teach mathematics is…(1) to transmit knowledge, (2) to know to share it, (3) to transmit the reasoning ability, and (4) to show how to solve problems.

  11. How Can We Improve Problem Solving in Undergraduate Biology? Applying Lessons from 30 Years of Physics Education Research

    PubMed Central

    Hoskinson, A.-M.; Caballero, M. D.; Knight, J. K.

    2013-01-01

    If students are to successfully grapple with authentic, complex biological problems as scientists and citizens, they need practice solving such problems during their undergraduate years. Physics education researchers have investigated student problem solving for the past three decades. Although physics and biology problems differ in structure and content, the instructional purposes align closely: explaining patterns and processes in the natural world and making predictions about physical and biological systems. In this paper, we discuss how research-supported approaches developed by physics education researchers can be adopted by biologists to enhance student problem-solving skills. First, we compare the problems that biology students are typically asked to solve with authentic, complex problems. We then describe the development of research-validated physics curricula emphasizing process skills in problem solving. We show that solving authentic, complex biology problems requires many of the same skills that practicing physicists and biologists use in representing problems, seeking relationships, making predictions, and verifying or checking solutions. We assert that acquiring these skills can help biology students become competent problem solvers. Finally, we propose how biology scholars can apply lessons from physics education in their classrooms and inspire new studies in biology education research. PMID:23737623

  12. Prognosis and continuity of child mental health problems from preschool to primary school: results of a four-year longitudinal study.

    PubMed

    Beyer, Thomas; Postert, Christian; Müller, Jörg M; Furniss, Tilman

    2012-08-01

    In a four-year longitudinal study, changes in and continuity of behavioral and emotional problems were examined in 814 subjects from kindergarten to primary school. Mental health problems were assessed by means of the Child Behavior Checklist (CBCL). The distribution of the CBCL broadband groups revealed a high level of continuity of internalizing symptoms over the four-year period and a shift from externalizing symptoms at baseline towards a combination of internalizing and externalizing symptoms at follow-up. The presence of mental health problems at follow-up was correlated with gender (higher amongst boys), pre-existing mental health problems at baseline, and separation or divorce of the parents, but not with single-family status or the age and educational level of the mother. The increasing number of children with a combination of internalizing and externalizing symptoms demonstrates the increasing complexity of child mental health problems in the developmental span from preschool age to school age.

  13. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to other areas of research that give rise to large-scale eigenvalue problems for matrices which, roughly speaking, have a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
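
    A bare-bones Rayleigh-Ritz version of the projection idea, for illustration only (the paper's point is precisely that its numerical generalization converges to the exact eigenvalues, which this naive truncation does not):

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 400, 40
    # Diagonally dominant symmetric matrix with diagonal sorted ascending.
    A = np.diag(np.sort(rng.uniform(0.0, 100.0, n)))
    A += 0.5 * rng.standard_normal((n, n))
    A = (A + A.T) / 2

    V = np.eye(n)[:, :m]           # keep the m "low-energy" coordinates
    A_red = V.T @ A @ V            # effective m x m model
    print("projected:", np.round(np.linalg.eigvalsh(A_red)[:3], 3))
    print("exact:    ", np.round(np.linalg.eigvalsh(A)[:3], 3))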

  14. Case study method and problem-based learning: utilizing the pedagogical model of progressive complexity in nursing education.

    PubMed

    McMahon, Michelle A; Christopher, Kimberly A

    2011-08-19

    As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.

  15. Prospects of molybdenum and rhenium octahedral cluster complexes as X-ray contrast agents.

    PubMed

    Krasilnikova, Anna A; Shestopalov, Michael A; Brylev, Konstantin A; Kirilova, Irina A; Khripko, Olga P; Zubareva, Kristina E; Khripko, Yuri I; Podorognaya, Valentina T; Shestopalova, Lidiya V; Fedorov, Vladimir E; Mironov, Yuri V

    2015-03-01

    The investigation of new X-ray contrast media for radiography has been an important field of science since the discovery of X-rays in 1895. Despite the wide diversity of available X-ray contrast media, toxicity, especially nephrotoxicity, remains a major problem to be solved. Octahedral metal-cluster complexes of the general formula [{M6Q8}L6] can be considered quite promising candidates for the role of new radiocontrast media due to their high local concentration of heavy elements, highly tunable ligand environment and low toxicity. To exemplify this, X-ray computed tomography experiments were carried out for the first time on several octahedral cluster complexes of molybdenum and rhenium. Based on the data obtained, it was proposed to investigate the toxicological properties of the cluster complex Na2H8[{Re6Se8}(P(CH2CH2CONH2)(CH2CH2COO)2)6]. The observed low cytotoxic and acute toxic effects, along with rapid renal excretion of the cluster complex, point to its promise as an X-ray contrast medium for radiography. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Capturing the complexity of first opinion small animal consultations using direct observation

    PubMed Central

    Robinson, N. J.; Brennan, M. L.; Cobb, M.; Dean, R. S.

    2015-01-01

    Various different methods are currently being used to capture data from small animal consultations. The aim of this study was to develop a tool to record detailed data from consultations by direct observation. A second aim was to investigate the complexity of the consultation by examining the number of problems discussed per patient. A data collection tool was developed and used during direct observation of small animal consultations in eight practices. Data were recorded on consultation type, patient signalment and number of problems discussed. During 16 weeks of data collection, 1901 patients were presented. Up to eight problems were discussed for some patients; more problems were discussed during preventive medicine consultations than during first consultations (P<0.001) or revisits (P<0.001). Fewer problems were discussed for rabbits than cats (P<0.001) or dogs (P<0.001). Age was positively correlated with discussion of specific health problems and negatively correlated with discussion of preventive medicine. Consultations are complex with multiple problems frequently discussed, suggesting comorbidity may be common. Future research utilising practice data should consider how much of this complexity needs to be captured, and use appropriate methods accordingly. The findings here have implications for directing research and education as well as application in veterinary practice. PMID:25262057

  17. Medicines counterfeiting is a complex problem: a review of key challenges across the supply chain.

    PubMed

    Tremblay, Michael

    2013-02-01

    The paper begins by asking why there is a market for counterfeit medicines, which in effect creates the problem of counterfeiting itself. Contributing factors include supply chain complexity and the lack of whole-systems thinking. These two underpin the author's view that counterfeiting is a complex (i.e. wicked) problem, and that corporate, public policy and regulatory actions need to be mindful of how their actions may be causal. The paper offers a problem-based review of key components of this complexity, viz., the knowledge end-users/consumers have of medicines; whether restrictive information policies may hamper information provision to patients; the internet's direct access to consumers; internet-enabled distribution of unsafe and counterfeit medicines; whether the internet is a parallel and competitive supply chain to legitimate routes; organised crime as an emerging medicines manufacturer and supplier; and whether substandard medicines are really the bigger problem. Solutions respect the perceived complexity of the supply chain challenges. The paper identifies the need to avoid technologically-driven solutions, calling for 'technological agnosticism'. Both regulation and public policy need to reflect the dynamic nature of the problem and avoid creating perverse incentives; it may be, for instance, that medicines pricing and reimbursement policies, which affect consumer/patient access, may act as market signals to counterfeiters, since this creates a cash market in cheaper drugs.

  18. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
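
    For reference, the baseline POD that the paper compares SOD against can be computed from a snapshot matrix with a single SVD; the synthetic data and the 99.9% energy threshold below are illustrative choices, not the paper's test cases.

    import numpy as np

    t = np.linspace(0.0, 10.0, 500)
    x = np.linspace(0.0, 1.0, 64)
    # Synthetic two-mode "simulation"; snapshots are columns.
    X = (np.outer(np.sin(np.pi * x), np.cos(2 * t))
         + 0.1 * np.outer(np.sin(3 * np.pi * x), np.cos(9 * t)))

    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1    # modes for 99.9% energy
    Phi = U[:, :r]                                 # reduced-order basis
    X_rom = Phi @ (Phi.T @ X)                      # rank-r reconstruction
    print(r, "POD modes, relative error",
          np.linalg.norm(X - X_rom) / np.linalg.norm(X))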

  19. Large Spatial and Temporal Separations of Cause and Effect in Policy Making - Dealing with Non-linear Effects

    NASA Astrophysics Data System (ADS)

    McCaskill, John

    There can be large spatial and temporal separations between cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to foresee and plan for probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that seem improbable only because of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects in complex systems. This chapter highlights those problem issues and offers insights into the usefulness of agent-based modeling (ABM) in dealing with non-linear effects in complex policy-making environments.

  20. Dimensionality of visual complexity in computer graphics scenes

    NASA Astrophysics Data System (ADS)

    Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce

    2008-02-01

    How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the data using multidimensional scaling of the pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material/lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
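
    The embedding step can be reproduced in outline with an off-the-shelf MDS implementation applied to a matrix of pooled pairwise dissimilarities; the data below are synthetic stand-ins for the subject judgements (requires scikit-learn):

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(3)
    pts = rng.uniform(size=(21, 2))        # hidden "true" scene positions
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # dissimilarities

    embedding = MDS(n_components=2, dissimilarity="precomputed",
                    random_state=0).fit_transform(D)
    print(embedding.shape)                 # (21, 2): one 2-D point per scene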

  1. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from 2 facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details which would be necessary to compute the noise remains challenging. In particular, how to best simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  2. Methods for solving reasoning problems in abstract argumentation – A survey

    PubMed Central

    Charwat, Günther; Dvořák, Wolfgang; Gaggl, Sarah A.; Wallner, Johannes P.; Woltran, Stefan

    2015-01-01

    Within the last decade, abstract argumentation has emerged as a central field in Artificial Intelligence. Besides providing a core formalism for many advanced argumentation systems, abstract argumentation has also served to capture several non-monotonic logics and other AI related principles. Although the idea of abstract argumentation is appealingly simple, several reasoning problems in this formalism exhibit high computational complexity. This calls for advanced techniques when it comes to implementation issues, a challenge which has been recently faced from different angles. In this survey, we give an overview on different methods for solving reasoning problems in abstract argumentation and compare their particular features. Moreover, we highlight available state-of-the-art systems for abstract argumentation, which put these methods to practice. PMID:25737590
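
    One of the computationally easy problems the survey covers is reasoning under grounded semantics: the grounded extension is the least fixed point of the characteristic function and is computable in polynomial time, as in this small sketch.

    def grounded(args, attacks):
        """args: iterable of arguments; attacks: set of (attacker, target)."""
        attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
        S = set()
        while True:
            # a is acceptable w.r.t. S if S attacks every attacker of a.
            new = {a for a in args
                   if all(any((d, att) in attacks for d in S)
                          for att in attackers[a])}
            if new == S:
                return S
            S = new

    # a attacks b, b attacks c: the grounded extension is {a, c}.
    print(grounded({"a", "b", "c"}, {("a", "b"), ("b", "c")}))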

  3. Interference elimination in digital controllers of automation systems of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Solomentsev, K. Yu; Fugarov, D. D.; Purchina, O. A.; Poluyan, A. Y.; Nesterchuk, V. V.; Petrenkova, S. B.

    2018-05-01

    This article considers problems that arise in the development of digital controllers for automatic control systems. In the presence of interference (noise), and also at high sampling frequencies, digital differentiation produces a large error, because the derivative is computed as the difference of two nearly equal values. A differentiation method is proposed that reduces this error by averaging the difference quotient over a series of values. A block diagram for implementing this differentiation method in controller design is presented.
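
    A minimal sketch of the idea, under the assumption that "averaging the difference quotient" means averaging several adjacent two-point quotients (equivalently, differencing over a wider baseline); the signal, slope and noise level are invented:

    import random

    def sample(k, dt):                    # ramp of true slope 2.0 plus noise
        return 2.0 * k * dt + random.gauss(0.0, 0.01)

    dt, N, W = 0.001, 2000, 50            # sample period, samples, window size
    x = [sample(k, dt) for k in range(N)]

    naive = (x[-1] - x[-2]) / dt          # single two-point difference quotient
    averaged = sum((x[-1 - i] - x[-2 - i]) / dt for i in range(W)) / W

    print("true slope 2.0 | naive %+.2f | averaged %+.2f" % (naive, averaged))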

  4. Algorithms for elasto-plastic-creep postbuckling

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1984-01-01

    This paper considers the development of an improved constrained time-stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subject to high-temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time-dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.

  5. High-resolution numerical approximation of traffic flow problems with variable lanes and free-flow velocities.

    PubMed

    Zhang, Peng; Liu, Ru-Xun; Wong, S C

    2005-05-01

    This paper develops macroscopic traffic flow models for a highway section with variable lanes and free-flow velocities, that involve spatially varying flux functions. To address this complex physical property, we develop a Riemann solver that derives the exact flux values at the interface of the Riemann problem. Based on this solver, we formulate Godunov-type numerical schemes to solve the traffic flow models. Numerical examples that simulate the traffic flow around a bottleneck that arises from a drop in traffic capacity on the highway section are given to illustrate the efficiency of these schemes.
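
    The flavour of such a scheme can be seen in the following simplified sketch: a Godunov method for the basic LWR model with a Greenshields flux on a uniform road. The paper's schemes additionally evaluate the exact Riemann flux across interfaces where lanes and free-flow velocities change, which this sketch does not model.

    import numpy as np

    v, rho_max = 1.0, 1.0
    f = lambda r: v * r * (1.0 - r / rho_max)  # Greenshields flux
    rho_c = rho_max / 2                        # density maximizing the flux

    def godunov_flux(rl, rr):
        """Exact Riemann flux at an interface for this concave flux."""
        if rl <= rr:
            return min(f(rl), f(rr))           # shock-type solution
        if rl > rho_c > rr:
            return f(rho_c)                    # transonic rarefaction
        return max(f(rl), f(rr))

    nx = 200
    dx, dt = 1.0 / nx, 0.4 * (1.0 / nx) / v    # CFL-limited time step
    rho = np.where(np.linspace(0, 1, nx) < 0.5, 0.8, 0.2)  # jam upstream

    for _ in range(200):
        F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(nx - 1)])
        rho[1:-1] -= dt / dx * (F[1:] - F[:-1])

    print("density now ranges from %.2f to %.2f" % (rho.min(), rho.max()))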

  6. Intermunicipal health care consortia in Brazil: strategic behavior, incentives and sustainability.

    PubMed

    Teixeira, Luciana; Bugarin, Mauricio; Dourado, Maria Cristina

    2006-01-01

    This article studies strategic behavior in municipal health care consortia, in which neighboring municipalities form a partnership to supply high-complexity health care. Each municipality partially funds the organization. Depending on the partnership contract, a free-rider problem may jeopardize the organization: a municipality will default on its payments if it can still benefit from the services, especially when political pressures for competing expenditures arise. The main result is that the partnership's sustainability depends on punishment mechanisms for a defaulting member, the gains from joint provision of services and the overall economic environment. Possible solutions to the incentive problem are discussed.

  7. Seeing around a Ball: Complex, Technology-Based Problems in Calculus with Applications in Science and Engineering-Redux

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    A complex technology-based problem in visualization and computation for students in calculus is presented. Strategies are shown for its solution and the opportunities for students to put together sequences of concepts and skills to build for success are highlighted. The problem itself involves placing an object under water in order to actually see…

  8. Cognitive and Motivational Impacts of Learning Game Design on Middle School Children

    ERIC Educational Resources Information Center

    Akcaoglu, Mete

    2013-01-01

    In today's complex and fast-evolving world, problem solving is an important skill to possess. For young children to be successful in their future careers, they need to have the "skill" and the "will" to solve complex problems that are beyond the well-defined problems that they learn to solve at school. One promising approach…

  9. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  10. Optimization of controlled processes in combined-cycle plant (new developments and researches)

    NASA Astrophysics Data System (ADS)

    Tverskoy, Yu S.; Muravev, I. K.

    2017-11-01

    All modern complex technical systems, including power units of thermal and nuclear power plants, operate within the system-forming structure of a multifunctional automated process control system (APCS). Advances in the mathematical support of modern APCS make it possible to extend automation to the solution of complex optimization problems of equipment heat and mass exchange processes in real time. The difficulty of efficiently managing a combined-cycle power unit is related to the need to solve at least three problems jointly. The first problem concerns the physical issues of combined-cycle technologies. The second is determined by the sensitivity of CCGT operation to changes in regime and climatic factors. The third is related to a precise description of the vector of controlled coordinates of a complex technological object. To obtain a joint solution of this complex of interconnected problems, the methodology of generalized thermodynamic analysis, methods of the theory of automatic control, and mathematical modeling are used. The present report shows the results of new developments and studies that improve the principles of process control and the structural synthesis of automatic control systems for power units with combined-cycle plants, providing attainable technical and economic efficiency and operational reliability of equipment.

  11. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between the molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high-accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  12. Expert systems for superalloy studies

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Kaukler, William F.

    1990-01-01

    There are many areas in science and engineering which require knowledge of an extremely complex foundation of experimental results in order to design methodologies for developing new materials or products. Superalloys fit well into this discussion in the sense that they are complex combinations of elements which exhibit certain characteristics. Obviously the use of superalloys in high performance, high temperature systems such as the Space Shuttle Main Engine is of interest to NASA. The superalloy manufacturing process is complex, and the implementation of an expert system within the design process requires some thought as to how and where it should be implemented. A major motivation is to develop a methodology to assist metallurgists in the design of superalloy materials using current expert systems technology. Hydrogen embrittlement is disastrous to rocket engines, and the heuristics can be very complex. Attacking this problem as one module in the overall design process represents a significant step forward. In order to describe the objectives of the first phase implementation, the expert system was designated the Hydrogen Environment Embrittlement Expert System (HEEES).

  13. Report to Congress on the U.S. Department of Energy's Environmental Management Science Program: Research funded and its linkages to environmental cleanup problems, and high out-year cost environmental management project descriptions. Volume 3 of 3 -- Appendix C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-04-01

    The Department of Energy's Environmental Management Science Program (EMSP) serves as a catalyst for the application of scientific discoveries to the development and deployment of technologies that will lead to reduction of the costs and risks associated with cleaning up the nation's nuclear complex. Appendix C provides details about each of the Department's 82 high cost projects and lists the EMSP research awards with potential to impact each of these projects. The high cost projects listed are those having costs greater than $50 million in constant 1998 dollars from the year 2007 and beyond, based on the March 1998 Accelerating Cleanup: Paths to Closure draft data, and having costs or quantities of material associated with an environmental management problem area. The high cost project information is grouped by operations office and organized by site and project code. Each operations office section begins with a list of research needs associated with that operations office. Potentially related research awards are listed by problem area in the Index of Research Awards by Environmental Management Problem Area, which can be found at the end of appendices B and C. For projects that address high risks to the public, workers, or the environment, refer also to the Health/Ecology/Risk problem area awards. Research needs are programmatic or technical challenges that may benefit from knowledge gained through basic research.

  14. The Impact of Adaptive Complex Assessment on the HOT Skill Development of Students

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2016-01-01

    In this paper we propose a method for the adaptive complex assessment (ACA) of the higher-order thinking (HOT) skills needed by students for problem solving, and we examine the impact of the method on the development of HOT skills in a problem-based learning (PBL) environment. Complexity in the assessment is provided by initial, formative, and…

  15. Individual Differences in Students' Complex Problem Solving Skills: How They Evolve and What They Imply

    ERIC Educational Resources Information Center

    Wüstenberg, Sascha; Greiff, Samuel; Vainikainen, Mari-Pauliina; Murphy, Kevin

    2016-01-01

    Changes in the demands posed by increasingly complex workplaces in the 21st century have raised the importance of nonroutine skills such as complex problem solving (CPS). However, little is known about the antecedents and outcomes of CPS, especially with regard to malleable external factors such as classroom climate. To investigate the relations…

  16. Flow simulations about steady-complex and unsteady moving configurations using structured-overlapped and unstructured grids

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1995-01-01

    The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamic (CFD) methodologies that are based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated. These are namely the structured-overlapped and the unstructured grid schemes. Both methods use a cell-centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles of attack for the pitching airfoil. From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU-efficient than the unstructured approach.

  17. Fluorophore Metal-Organic Complexes: High-Throughput Optical Screening for Aprotic Electrochemical Systems.

    PubMed

    Park, Sung Hyeon; Choi, Chang Hyuck; Lee, Seung Yong; Woo, Seong Ihl

    2017-02-13

    Combinatorial optical screening of aprotic electrocatalysts has not yet been achieved, primarily due to H+-associated mechanisms of fluorophore modulation. We have overcome this problem by using fluorophore metal-organic complexes. In particular, eosin Y and quinine can be coordinated with various metallic cations (e.g., Li+, Na+, Mg2+, Zn2+, and Al3+) in aprotic solvents, triggering changes in their fluorescent properties. These interactions have been used in a reliable screening method to determine oxygen reduction/evolution reaction activities of 100 Mn-based binary catalysts for the aprotic Li-air battery.

  18. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
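
    Hockney's original characterization, which the first model generalizes, prices the movement of n words at t(n) = t0 + n / r_inf, so the achieved bandwidth is r_inf * n / (n + n_half) with n_half = t0 * r_inf. The parameter values below are invented for illustration:

    t0 = 2e-6              # startup latency, seconds (assumed)
    r_inf = 1e9            # asymptotic transfer rate, words/second (assumed)
    n_half = t0 * r_inf    # message size that reaches half the peak rate

    for n in (10, 100, 1_000, 10_000, 100_000):
        t = t0 + n / r_inf
        print("n = %7d: %8.2f us, %5.1f%% of peak bandwidth"
              % (n, t * 1e6, 100.0 * n / t / r_inf))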

  19. On the problem of constructing a modern, economic radiotelescope complex

    NASA Technical Reports Server (NTRS)

    Bogomolov, A. F.; Sokolov, A. G.; Poperechenko, B. A.; Polyak, V. S.

    1977-01-01

    Criteria for comparing and planning the technical and economic characteristics of large parabolic reflector antenna systems and other types used in radioastronomy and deep space communications are discussed. The experience gained in making and optimizing a series of highly efficient parabolic antennas in the USSR is reviewed. Several ways are indicated for further improving the complex characteristics of antennas similar to the original TNA-1500 64m radio telescope. The suggestions can be applied in planning the characteristics of radiotelescopes which are now being built, in particular, the TNA-8000 with a diameter of 128 m.

  20. Design of supercritical cascades with high solidity

    NASA Technical Reports Server (NTRS)

    Sanz, J. M.

    1982-01-01

    The method of complex characteristics of Garabedian and Korn was successfully used to design shockless cascades with solidities of up to one. A code was developed using this method and a new hodograph transformation of the flow onto an ellipse. This code allows the design of cascades with solidities of up to two and larger turning angles. The equations of potential flow are solved in a complex hodograph like domain by setting a characteristic initial value problem and integrating along suitable paths. The topology that the new mapping introduces permits a simpler construction of these paths of integration.

  1. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  2. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator with respect to the complex field variable results in a complex-valued adjoint sensitivity, which drives the originally real-valued design variable to become complex during the iterative solution procedure; the adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real part operator has been used to extract the real part of the sensitivity and preserve the real-valued design variable. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting them back into the wave equations to derive coupled equations equivalent to the original wave equations, with the infinite free space truncated by perfectly matched layers. The topology optimization problems are thereby transformed into forms defined on real functional spaces instead of complex ones; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; and the self-consistent adjoint sensitivity is derived, so the derived structural topology no longer depends on the phase of the incident wave. Several numerical examples demonstrate the robustness of the self-consistent adjoint analysis.
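
    As a schematic illustration of the splitting described above (our notation, not the article's full formulation, which also carries the perfectly matched layers and the design-variable interpolation), consider a scalar Helmholtz equation \nabla^2 u + k^2 \varepsilon u = f with complex field u = u_r + i u_i, permittivity \varepsilon = \varepsilon_r + i \varepsilon_i, and source f = f_r + i f_i. Collecting real and imaginary parts gives the coupled real-valued system

      \nabla^2 u_r + k^2\,(\varepsilon_r u_r - \varepsilon_i u_i) = f_r,
      \qquad
      \nabla^2 u_i + k^2\,(\varepsilon_i u_r + \varepsilon_r u_i) = f_i,

    on which the adjoint analysis proceeds entirely over the real pair (u_r, u_i), so no conjugate operator ever enters the sensitivity.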

  3. Overview of Infrastructure Science and Analysis for Homeland Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backhaus, Scott N.

    This presentation offers an analysis of infrastructure science, with the goals of providing third-party, independent, science-based input into complex problems of national concern and using scientific analysis to "turn down the noise" around complex problems.

  4. SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES

    EPA Science Inventory

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems, due to the complex nature of the problems, the need for complex assessments, and complicated ...

  5. Semantic Annotation of Complex Text Structures in Problem Reports

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Throop, David R.; Fleming, Land D.

    2011-01-01

    Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.
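
    To make the syntactic-then-semantic idea concrete, here is a deliberately tiny, hypothetical sketch (not the presenters' system; the facet vocabularies are invented): a syntactic pass tokenizes the free text, and a semantic pass maps tokens onto facet categories of the kind a faceted search interface would combine with database code fields.

      import re

      # toy facet vocabularies -- a real system would use curated ontologies
      EQUIPMENT = {"valve", "pump", "sensor", "thruster"}
      PROBLEM = {"leak", "crack", "failure", "overheat"}

      def tag_report(text):
          # syntactic pass: crude tokenization of the free-text description
          tokens = re.findall(r"[a-z]+", text.lower())
          # semantic pass: map tokens onto facet categories
          return [(t, "EQUIPMENT") for t in tokens if t in EQUIPMENT] + \
                 [(t, "PROBLEM") for t in tokens if t in PROBLEM]

      print(tag_report("Coolant pump failure traced to a hairline crack."))
      # [('pump', 'EQUIPMENT'), ('failure', 'PROBLEM'), ('crack', 'PROBLEM')]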

  6. The bright side of being blue: Depression as an adaptation for analyzing complex problems

    PubMed Central

    Andrews, Paul W.; Thomson, J. Anderson

    2009-01-01

    Depression ranks as the primary emotional problem for which help is sought. Depressed people often have severe, complex problems, and rumination is a common feature. Depressed people often believe that their ruminations give them insight into their problems, but clinicians often view depressive rumination as pathological because it is difficult to disrupt and interferes with the ability to concentrate on other things. Abundant evidence indicates that depressive rumination involves the analysis of episode-related problems. Because analysis is time consuming and requires sustained processing, disruption would interfere with problem-solving. The analytical rumination (AR) hypothesis proposes that depression is an adaptation that evolved as a response to complex problems and whose function is to minimize disruption of rumination and sustain analysis of complex problems. It accomplishes this by giving episode-related problems priority access to limited processing resources, by reducing the desire to engage in distracting activities (anhedonia), and by producing psychomotor changes that reduce exposure to distracting stimuli. Because processing resources are limited, the inability to concentrate on other things is a tradeoff that must be made to sustain analysis of the triggering problem. The AR hypothesis is supported by evidence from many levels, including genes, neurotransmitters and their receptors, neurophysiology, neuroanatomy, neuroenergetics, pharmacology, cognition and behavior, and the efficacy of treatments. In addition, we address and provide explanations for puzzling findings in the cognitive and behavioral genetics literatures on depression. In the process, we challenge the belief that serotonin transmission is low in depression. Finally, we discuss implications of the hypothesis for understanding and treating depression. PMID:19618990

  7. Weighted SGD for ℓ p Regression with Randomized Preconditioning.

    PubMed

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W

    2016-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that depend only on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in O(log n · nnz(A) + poly(d)/ε^2) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints, which in general is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and in the solution vector measured in prediction norm in O(log n · nnz(A) + poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n, and low dimension d satisfy d ≥ 1/ε and n ≥ d^2/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets; the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10^-3, more quickly.
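
    A minimal sketch of the pwSGD recipe for the unconstrained ℓ2 case may help fix ideas. It is illustrative only, not the authors' implementation: it uses a dense Gaussian sketch rather than a fast sketching transform, row-norm sampling on the preconditioned matrix as the importance distribution, and a simple diminishing step size chosen for stability in this toy; the paper derives sharper choices.

      import numpy as np

      def pwsgd_l2(A, b, n_iter=5000, sketch_factor=4, seed=0):
          rng = np.random.default_rng(seed)
          n, d = A.shape
          # 1) RLA preconditioning: sketch A, take R from a QR of the sketch
          s = sketch_factor * d
          S = rng.standard_normal((s, n)) / np.sqrt(s)
          _, R = np.linalg.qr(S @ A)
          A_pre = np.linalg.solve(R.T, A.T).T        # A @ inv(R), well-conditioned
          # 2) importance sampling distribution from preconditioned row norms
          norms = (A_pre ** 2).sum(axis=1)
          D = norms.sum()
          p = norms / D
          # 3) weighted SGD on the preconditioned system; iterate y = R x
          y = np.zeros(d)
          for t in range(1, n_iter + 1):
              i = rng.choice(n, p=p)
              g = (A_pre[i] @ y - b[i]) * A_pre[i] / p[i]   # unbiased gradient estimate
              y -= g / (D + t)                              # diminishing, stable step
          return np.linalg.solve(R, y)                      # map back to x

      # toy usage
      rng = np.random.default_rng(1)
      A = rng.standard_normal((2000, 10))
      x_true = rng.standard_normal(10)
      b = A @ x_true + 0.01 * rng.standard_normal(2000)
      print(np.linalg.norm(pwsgd_l2(A, b) - x_true))        # small residual error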

  8. Analysis of biosurfaces by neutron reflectometry: From simple to complex interfaces

    DOE PAGES

    Junghans, Ann; Watkins, Erik B.; Barker, Robert D.; ...

    2015-03-16

    Because of its high sensitivity to light elements and the ability to manipulate scattering contrast via isotopic substitution, neutron reflectometry (NR) is an excellent tool for studying the structure of soft-condensed material. These materials include model biophysical systems as well as in situ living tissue at the solid–liquid interface. The penetrability of neutrons makes NR suitable for probing thin films with thicknesses of 5–5000 Å at various buried (for example, solid–liquid) interfaces [J. Daillant and A. Gibaud, Lect. Notes Phys. 770, 133 (2009); G. Fragneto-Cusani, J. Phys.: Condens. Matter 13, 4973 (2001); J. Penfold, Curr. Opin. Colloid Interface Sci. 7, 139 (2002)]. Over the past two decades, NR has evolved to become a key tool in the characterization of biological and biomimetic thin films. Highlighted in the current report are some of the authors' recent accomplishments in utilizing NR to study highly complex systems, including in situ experiments. Such studies will result in a much better understanding of complex biological problems, have significant medical impact by suggesting innovative treatments, and advance the development of highly functionalized biomimetic materials.

  9. Reduced complexity of multi-track joint 2-D Viterbi detectors for bit-patterned media recording channel

    NASA Astrophysics Data System (ADS)

    Myint, L. M. M.; Warisarn, C.

    2017-05-01

    Two-dimensional (2-D) interference is one of the prominent challenges in ultra-high-density recording systems such as bit-patterned media recording (BPMR). The multi-track joint 2-D detection technique, with the help of array-head reading, can tackle this problem effectively by jointly processing the multiple readback signals from adjacent tracks. Moreover, it can robustly alleviate the impairments due to track mis-registration (TMR) and media noise. However, the computational complexity of such detectors is normally too high to implement in practice, even for a few tracks. Therefore, in this paper, we focus on reducing the complexity of the multi-track joint 2-D Viterbi detector without paying a large penalty in performance. We propose a simplified multi-track joint 2-D Viterbi detector with a manageable complexity level for the BPMR multi-track multi-head (MTMH) system. In the proposed method, the complexity of the detector's trellis is reduced with the help of a joint-track equalization method that employs 1-D equalizers and a 2-D generalized partial response (GPR) target. Moreover, we also examine the performance of a full-fledged multi-track joint 2-D detector and conventional 2-D detection. The results show that the simplified detector can perform close to the full-fledged detector, especially when the system faces high media noise, at significantly lower complexity.
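
    The complexity pressure that motivates the simplification is easy to see in a toy single-track Viterbi detector for a one-dimensional intersymbol-interference channel (an illustrative analogue written for this listing, not the paper's 2-D MTMH detector): the trellis already has 2^m states for channel memory m, and a joint detector over T tracks scales like 2^(mT) states, which is why trellis reduction via equalization to a short GPR target matters.

      import numpy as np

      def viterbi(r, h):
          # ML detection of +/-1 symbols through an ISI channel
          # y_k = sum_j h[j] * x_{k-j}; state = previous m bits, m = len(h) - 1
          m = len(h) - 1
          S = 2 ** m
          cost = np.full(S, np.inf)
          cost[0] = 0.0                       # assume all-zero (NRZ -1) history
          back = []
          for rk in r:
              new = np.full(S, np.inf)
              bp = np.zeros(S, dtype=int)
              for s in range(S):
                  for b in (0, 1):
                      bits = [b] + [(s >> i) & 1 for i in range(m)]
                      y = sum(h[j] * (2 * bits[j] - 1) for j in range(m + 1))
                      ns = ((s << 1) | b) & (S - 1)
                      c = cost[s] + (rk - y) ** 2
                      if c < new[ns]:
                          new[ns], bp[ns] = c, s
              back.append(bp)
              cost = new
          s = int(cost.argmin())              # traceback from the cheapest state
          out = []
          for bp in reversed(back):
              out.append(s & 1)
              s = bp[s]
          return out[::-1]

      h = [1.0, 0.5]                          # toy partial-response target
      x = [1, 0, 1, 1, 0]
      nrz = [2 * b - 1 for b in x]
      r = [h[0] * nrz[k] + h[1] * (nrz[k - 1] if k else -1) for k in range(len(x))]
      print(viterbi(r, h))                    # recovers [1, 0, 1, 1, 0] noiselessly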

  10. Wicked Problems in Large Organizations: Why Pilot Retention Continues to Challenge the Air Force

    DTIC Science & Technology

    2017-05-25

    ABSTRACT This monograph in military studies investigates the makeup of and approach to complex problems, with a case study on the Air Force’s...priorities, as well as a short, recent history of the pilot retention problem. Following that is a case study on the work done by the Air Staff in...Lonsberry, USAF, 38 pages.

  11. Specific features of modern multifunctional high-rise building construction

    NASA Astrophysics Data System (ADS)

    Manukhina, Lyubov; Samosudova, Natal'ja

    2018-03-01

    The article analyzes the main reasons for the development of high-rise construction, the most important of which is the limited supply of urban land and, consequently, the high price of land reserved for construction. New engineering and compositional solutions for creating new types of buildings are considered: complex technical designs with a large number of storeys that fully meet new requirements for safety and comfort. Some peculiarities of designing high-rise buildings and searching for optimal architectural and planning solutions are revealed since, despite their external architectural simplicity, high-rise buildings have complex structural, technological, and space-planning solutions. Specific features of high-rise housing in various countries around the world, including Russia, are considered, such as the layout of multi-storey residential buildings depending on the climatic characteristics of the region; assessment of the geological risk of the construction site; the choice of parameters and functional purpose of sections of the high-rise construction territory; the location of the town-planning object for substantiating the overall dimensions of the building; and assessment of changes in aeration and in the engineering and hydrological conditions of the site. Special attention is given to the problems of improving the territory: the arrangement of courtyards, landscaping, and playgrounds and sports grounds. The main conclusion is that, as high-rise housing construction develops and population density increases in large Russian cities, it is necessary to create comfortable and safe living conditions for residents and to improve, rather than degrade, the quality of the urban environment.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacón, L., E-mail: chacon@lanl.gov; Chen, G.; Knoll, D.A.

    We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system-scale multiscale simulations leveraging exascale computing.
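
    The coupling pattern can be summarized schematically (our notation, a simplified sketch rather than the authors' formulation). For a one-dimensional kinetic equation with distribution f(x, v, t) and density n(x, t) = \int f \, dv, one HOLO time step iterates

      \text{HO (kinetic):}\quad \partial_t f + v\,\partial_x f = Q(f;\, n^{LO}),
      \qquad
      \text{LO (moment):}\quad \partial_t n + \partial_x \Gamma = 0,\quad \Gamma = \int v f\, dv,

    where the flux \Gamma closing the LO equation is evaluated from the latest HO solution, and the pair is re-solved until the HO and LO densities agree to a nonlinear tolerance, \lVert n^{HO} - n^{LO} \rVert < \text{tol}. The stiff time scales are absorbed by the cheap LO solve, so the expensive HO model needs far fewer implicit iterations.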

  13. GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA

    NASA Astrophysics Data System (ADS)

    Ren, Qinlong

    Lattice Boltzmann method (LBM) has been developed as a powerful numerical approach for simulating complex fluid flow and heat transfer phenomena during the past two decades. As a mesoscale method based on kinetic theory, LBM has several advantages over traditional numerical methods, such as the physical representation of microscopic interactions, ease of handling complex geometries, and a highly parallel nature. LBM has been applied to various fluid flow behaviors and heat transfer processes, including conjugate heat transfer, magnetic and electric fields, diffusion and mixing, chemical reactions, multiphase flow, phase change, non-isothermal flow in porous media, microfluidics, and fluid-structure interactions in biological systems. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can handle complex or moving geometries in the domain and can be coupled with LBM to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Euler nodes by LBM, while complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been a popular topic for decades as a way to accelerate computation in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as powerful high-performance workstations in recent years. Unlike CPUs, GPUs with thousands of cores are inexpensive; for example, the GPU used in the current work (GeForce GTX TITAN) has 2688 cores and costs only 1,000 US dollars. The release of NVIDIA's CUDA architecture in 2007, which includes both the hardware and the programming environment, has made GPU computing attractive. Owing to its highly parallel nature, the lattice Boltzmann method has been successfully ported to GPUs with significant performance benefits in recent years. In the current work, LBM CUDA code is developed for different fluid flow and heat transfer problems. In this dissertation, the lattice Boltzmann method and the immersed boundary method are used to study natural convection in an enclosure with an array of conducting obstacles, double-diffusive convection in a vertical cavity with Soret and Dufour effects, the PCM melting process in a latent heat thermal energy storage system with internal fins, mixed convection in a lid-driven cavity with a sinusoidal cylinder, and AC electrothermal pumping in microfluidic systems, all on a CUDA computational platform. It is demonstrated that LBM is an efficient method for simulating complex heat transfer problems using GPUs on CUDA.
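
    The per-node update that such CUDA kernels parallelize is compact enough to show in full. Below is a minimal CPU/NumPy sketch of one D2Q9 BGK collide-and-stream step with periodic boundaries (illustrative only; the grid setup, relaxation time, and boundary handling in the dissertation's solvers are more elaborate):

      import numpy as np

      # D2Q9 lattice: weights and discrete velocities
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])

      def bgk_step(f, tau):
          # macroscopic moments from the distributions f[q, x, y]
          rho = f.sum(axis=0)
          u = np.einsum('qi,qxy->ixy', c, f) / rho
          # BGK collision toward the discrete Maxwellian equilibrium
          cu = np.einsum('qi,ixy->qxy', c, u)
          usq = (u ** 2).sum(axis=0)
          feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
          f = f + (feq - f) / tau
          # streaming: shift each population along its lattice velocity
          for q in range(9):
              f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
          return f

      # start from rest (u = 0, rho = 1) and verify mass conservation
      f = np.tile(w[:, None, None], (1, 32, 32)).copy()
      f = bgk_step(f, tau=0.6)
      print(np.allclose(f.sum(axis=0), 1.0))   # True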

  14. How Do High School Students Solve Probability Problems? A Mixed Methods Study on Probabilistic Reasoning

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick

    2018-01-01

    When studying a complex research phenomenon, a mixed methods design allows researchers to answer a broader set of research questions and to tap into different aspects of the phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…

  15. High Schoolers' Views on Academic Integrity

    ERIC Educational Resources Information Center

    Bacha, Nahla Nola; Bahous, Rima; Nabhani, Mona

    2012-01-01

    The issue of academic integrity in cheating on exams and plagiarising in writing is not a new one. All schools need to address this problem and some more than others. In the L2 context, the issues become more complex as non-native students need to adhere to the "culture of learning" of a Western model of academic integrity if they are to…

  16. Critical Success Factors (CSFs) for Implementation of Enterprise Resource Planning (ERP) Systems in Various Industries, Including Institutions of Higher Education (IHEs)

    ERIC Educational Resources Information Center

    Debrosse-Bruno, Marie Michael

    2017-01-01

    Enterprise Resource Planning (ERP) systems present a management problem for various industries including institutions of higher education (IHEs) because they are costly to acquire, challenging to implement, and often fail to meet anticipated expectations. ERP systems are highly complex due to the nature of the operations they support. This…

  17. The STEM Teacher Drought: Cracks and Disparities in California's Math and Science Teacher Pipeline

    ERIC Educational Resources Information Center

    Wolf, Leni

    2015-01-01

    In today's fast-moving and interconnected world, high school and college graduates must be able to think critically and generate creative solutions to address complex problems. With the world producing new knowledge at an exponential rate, we cannot anticipate what all these future challenges will be. Without a doubt, they will impact a society…

  18. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must ... No longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged ... This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly

  19. Kids Know Their Schools Best: Reaching out to Them Can Improve Designs and Build Community Good Will

    ERIC Educational Resources Information Center

    Carlson, Michael

    2010-01-01

    More now than ever, our schools need to reach out and engage students. Dropout rates are high, achievement lags and increasingly students view schools as out of touch with their lives and their futures. Solutions to these problems are complex but I believe that making learning environments reflect student attitudes and perspectives plays an…

  20. Development and genetics of brain temporal stability related to attention problems in adolescent twins.

    PubMed

    Smit, Dirk J A; Anokhin, Andrey P

    2017-05-01

    The brain continuously develops and reorganizes to support an expanding repertoire of behaviors and increasingly complex cognition. These processes may, however, also result in the appearance or disappearance of specific neurodevelopmental disorders such as attention problems. To investigate whether brain activity changed during adolescence, how genetics shaped this change, and how these changes were related to attention problems, we measured EEG activity in 759 twins and siblings, assessed longitudinally in four waves (12, 14, 16, and 18 years of age). Attention problems were assessed with the SWAN at waves 12, 14, and 16. To characterize functional brain development, we used a measure of temporal stability (TS) of brain oscillations over the 5-min recording, reflecting the tendency of a brain to maintain the same oscillatory state for longer or shorter periods. Increased TS may reflect the brain's tendency to maintain stability, achieve focused attention, and thus reduce "mind wandering" and attention problems. The results indicate that brain TS increased across the scalp from age 12 to 18. TS showed large individual differences that were heritable. Change in TS (alpha oscillations) was heritable between ages 12 and 14 and between 14 and 16 for the frontal brain areas. Absolute levels of brain TS at each wave were positively, but not significantly, correlated with attention problems. High and low attention problems subjects showed different developmental trajectories in TS, which was significant in a cluster of frontal leads. These results indicate that trajectories in brain TS development are a biomarker for the developing brain. TS in brain oscillations is highly heritable, and age-related change in TS is also heritable in selected brain areas. These results suggest that subjects with high and low attention problems are at different stages of brain development. Copyright © 2016. Published by Elsevier B.V.

  1. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the Genetic Algorithm (GA) is discussed. A novel scheme of Hierarchical GA was developed to solve complicated engineering problems that require optimization of a large number of parameters with high precision. High-level GAs search over the few parameters to which system performance is most sensitive. Low-level GAs then search in more detail, employing a greater number of parameters for further optimization. The complexity of the search is thereby decreased and computing resources are used more efficiently.
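
    A compact illustration of the two-level scheme, under stated assumptions (a generic real-coded GA with tournament selection, blend crossover, and Gaussian mutation; the toy objective and all names are hypothetical, not the authors' code): the high-level GA optimizes only the sensitive parameters with the rest frozen, then the low-level GA refines all parameters inside a narrowed box around the high-level solution.

      import numpy as np

      def ga(fitness, bounds, pop=40, gens=60, seed=0):
          # minimal real-coded GA: tournament selection, blend crossover, mutation
          rng = np.random.default_rng(seed)
          lo, hi = bounds[:, 0], bounds[:, 1]
          P = rng.uniform(lo, hi, size=(pop, len(lo)))
          for _ in range(gens):
              fit = np.array([fitness(x) for x in P])
              idx = [min(rng.choice(pop, 2), key=lambda i: fit[i]) for _ in range(pop)]
              mates = P[idx]
              a = rng.random((pop, len(lo)))
              kids = a * mates + (1 - a) * mates[::-1]          # blend crossover
              kids += rng.normal(0, 0.05 * (hi - lo), kids.shape)
              P = np.clip(kids, lo, hi)
          fit = np.array([fitness(x) for x in P])
          return P[fit.argmin()]

      def cost(x):          # toy objective: x[0] and x[1] dominate the response
          return 10*(x[0] - 1)**2 + 10*(x[1] + 2)**2 + 0.1*((x[2:] - 0.5)**2).sum()

      bounds = np.array([[-5.0, 5.0]] * 6)
      # high level: search only the two sensitive parameters, others frozen at 0
      coarse = ga(lambda z: cost(np.concatenate([z, np.zeros(4)])), bounds[:2])
      # low level: refine all six parameters in a narrowed box around the result
      start = np.concatenate([coarse, np.zeros(4)])
      best = ga(cost, np.stack([start - 0.5, start + 0.5], axis=1), seed=1)
      print(best.round(2))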

  2. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  3. High-performance liquid chromatography analysis of plant saponins: An update 2005-2010

    PubMed Central

    Negi, Jagmohan S.; Singh, Pramod; Pant, Geeta Joshi Nee; Rawat, M. S. M.

    2011-01-01

    Saponins are widely distributed in the plant kingdom. In view of their wide range of biological activities and their occurrence as complex mixtures, saponins have been purified and separated by high-performance liquid chromatography using reverse-phase columns at low wavelengths. Most saponins are not detected by ultraviolet detectors due to their lack of chromophores. Electrospray ionization mass spectrometry, diode array detection, evaporative light scattering detection, and charged aerosol detection have been used to overcome the detection problem of saponins. PMID:22303089

  4. Graphical approach for multiple values logic minimization

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  5. A Literature Review - Problem Definition Studies on Selected Toxic Chemicals. Volume 3. Occupational Health and Safety Aspects of 2,4,6-Trinitrotoluene (TNT)

    DTIC Science & Technology

    1978-04-01

    crude liver extract, yellow bone marrow extract, a high-vitamin (C and B complex) and high-caloric diet. However, in spite of intensive treatment, the ... protein-rich diet ... Effect of Other Food Additives ... Vitamin C ... uncommon. Hassman (86) in 1971, in a review article about TNT, reported that both hypo- and hyper-menorrhea occur in women exposed to TNT.

  6. Enantioselective Organocatalytic α-Fluorination of Cyclic Ketones

    PubMed Central

    Kwiatkowski, Piotr; Beeson, Teresa D.; Conrad, Jay C.

    2011-01-01

    The first highly enantioselective α-fluorination of ketones using organocatalysis has been accomplished. The long-standing problem of enantioselective ketone α-fluorination via enamine activation has been overcome via high-throughput evaluation of a new library of amine catalysts. The optimal system, a primary amine functionalized Cinchona alkaloid, allows the direct and asymmetric α-fluorination of a variety of carbo- and heterocyclic substrates. Furthermore, this protocol also provides diastereo-, regio- and chemoselective catalyst control in fluorinations involving complex carbonyl systems. PMID:21247133

  7. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
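
    The flavor of such tuning can be conveyed with a generic search over one loop-schedule parameter. This is a stand-in sketch, not the OpenTuner API or the paper's kernels; the blocked matrix multiply and the candidate block sizes are hypothetical:

      import time
      import numpy as np

      N = 256
      A, B = np.random.rand(N, N), np.random.rand(N, N)

      def measure(block):
          # time a blocked matrix multiply; `block` is the tunable schedule knob
          t0 = time.perf_counter()
          C = np.zeros((N, N))
          for i in range(0, N, block):
              for j in range(0, N, block):
                  C[i:i+block, j:j+block] = A[i:i+block] @ B[:, j:j+block]
          return time.perf_counter() - t0

      # exhaustive search over a small candidate space, as an auto-tuner would;
      # real tuners also search layouts and compiler flags, and model energy
      best = min([16, 32, 64, 128, 256], key=measure)
      print("best block size:", best)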

  8. From phase transitions to the topological renaissance. Comment on "Topodynamics of metastable brains" by Arturo Tozzi et al.

    NASA Astrophysics Data System (ADS)

    Somogyvári, Zoltán; Érdi, Péter

    2017-07-01

    The neural topodynamics theory of Tozzi et al. [13] has two main foci: metastable brain dynamics and the topological approach based on the Borsuk-Ulam theorem (BUT). Briefly, metastable brain dynamics theory hypothesizes that temporarily stable synchronization and desynchronization of a large number of individual dynamical systems, formed by local neural circuits, are responsible for the coding of complex concepts in the brain, and that sudden changes of these synchronization patterns correspond to operational steps. But what dynamical network could form the substrate for this metastable dynamics, capable of entering a combinatorially high number of metastable synchronization patterns and of exhibiting rapid transient changes between them? The general problem is related to the discrimination between "Black Swans" and "Dragon Kings". While Black Swans are related to the theory of self-organized criticality, which suggests that high-impact extreme events are unpredictable, Dragon Kings are associated with the occurrence of a phase transition, whose emergent organization is based on intermittent criticality [9]. Widening the limits of predictability is one of the big open problems in the theory and practice of complex systems (Sect. 9.3 of Érdi [2]).

  9. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions with additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)

  10. Weak task-related modulation and stimulus representations during arithmetic problem solving in children with developmental dyscalculia

    PubMed Central

    Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod

    2015-01-01

    Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7–9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. PMID:22682904

  11. Weak task-related modulation and stimulus representations during arithmetic problem solving in children with developmental dyscalculia.

    PubMed

    Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod

    2012-02-15

    Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7-9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. An Interview with Matthew P. Greving, PhD. Interview by Vicki Glaser.

    PubMed

    Greving, Matthew P

    2011-10-01

    Matthew P. Greving is Chief Scientific Officer at Nextval Inc., a company founded in early 2010 that has developed a discovery platform called MassInsight™. He received his PhD in Biochemistry from Arizona State University, and prior to that he spent nearly 7 years working as a software engineer. This experience in solving complex computational problems fueled his interest in developing technologies and algorithms for the acquisition and analysis of high-dimensional biochemical data. To address the existing problems associated with label-based microarray readouts, he began work on a technique for label-free mass spectrometry (MS) microarray readout compatible with both matrix-assisted laser desorption/ionization (MALDI) and matrix-free nanostructure initiator mass spectrometry (NIMS). This is the core of Nextval's MassInsight technology, which utilizes picoliter noncontact deposition of high-density arrays on mass-readout substrates along with computational algorithms for high-dimensional data processing and reduction.

  13. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
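
    A toy shared-memory analogue of the distributed scheme (a process pool standing in for message-passing processors; brute-force Euclidean distances, with every worker holding the full point set) illustrates the embarrassingly parallel structure: each worker computes the k nearest neighbors for its block of query points, and the blocks are stitched into the knn graph.

      import numpy as np
      from multiprocessing import Pool

      def _knn_block(args):
          X, block, k = args
          # brute-force distances from this block's queries to all points
          d = np.linalg.norm(X[block][:, None, :] - X[None, :, :], axis=2)
          d[np.arange(len(block)), block] = np.inf        # exclude self-matches
          return block, np.argsort(d, axis=1)[:, :k]

      def knn_graph(X, k=5, workers=4):
          blocks = np.array_split(np.arange(len(X)), workers)
          with Pool(workers) as pool:
              parts = pool.map(_knn_block, [(X, blk, k) for blk in blocks])
          nbrs = np.empty((len(X), k), dtype=int)
          for blk, nn in parts:
              nbrs[blk] = nn
          return nbrs                                     # nbrs[i] = i's k nearest

      if __name__ == "__main__":
          X = np.random.rand(1000, 8)
          print(knn_graph(X).shape)                       # (1000, 5)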

  14. An outer approximation method for the road network design problem

    PubMed Central

    2018-01-01

    Best investment in the road infrastructure, or network design, is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as the bilevel Discrete Network Design Problem (DNDP), of NP-hard computational complexity. We engage with this complexity using a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint, resulting in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient in solving the DNDP for the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (used as a termination criterion), global optimum solutions are quickly reached in most cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the Gao and Sioux-Falls networks shows that for such a non-exact method the global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints also provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP. PMID:29590111

  15. Polygenic scores predict alcohol problems in an independent sample and show moderation by the environment.

    PubMed

    Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M

    2014-04-10

    Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems-derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)-predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects is small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
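
    In sketch form, the two analysis steps look like the following (fully simulated toy data: the SNP weights, environment variable, and effect sizes are hypothetical, not ALSPAC or FinnTwin12 estimates): a polygenic score is a weighted sum of risk-allele dosages, and moderation is read off the PRS × environment interaction coefficient in a regression.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 2000, 300
      G = rng.integers(0, 3, size=(n, m)).astype(float)   # allele dosages 0/1/2
      w = rng.normal(0, 0.05, m)                          # per-SNP GWAS weights
      prs = G @ w                                         # polygenic risk score
      env = rng.normal(size=n)                            # e.g., peer deviance
      # simulate an outcome with a small gene-environment interaction
      y = 0.10 * prs + 0.30 * env + 0.05 * prs * env + rng.normal(size=n)
      # test moderation: regress y on PRS, environment, and their product
      X = np.column_stack([np.ones(n), prs, env, prs * env])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("interaction estimate:", beta[3].round(3))    # near the true 0.05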

  16. An outer approximation method for the road network design problem.

    PubMed

    Asadi Bagloee, Saeed; Sarvi, Majid

    2018-01-01

    Best investment in the road infrastructure, or network design, is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as the bilevel Discrete Network Design Problem (DNDP), of NP-hard computational complexity. We engage with this complexity using a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint, resulting in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient in solving the DNDP for the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (used as a termination criterion), global optimum solutions are quickly reached in most cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the Gao and Sioux-Falls networks shows that for such a non-exact method the global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints also provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well.

  17. Fighting On All Fronts: A Critical Review Of The US Strategy Against ISIL

    DTIC Science & Technology

    2016-05-26

    developing a base sense of the sheer complexity. The Shia-led Iraqi government has exacerbated tensions with the Sunnis through its heavy-handedness ... only a part. In effect, only the symptom of a problem is being addressed instead of getting at the core of the problem. Looking at ISIL through ... Solving the Right Problem: Framing ISIL Through Complexity Science

  18. Group Planning and Task Efficiency with Complex Problems. Final Report.

    ERIC Educational Resources Information Center

    Lawson, E. D.

    One hundred eighty 4-man groups (90 of men and 90 of women) using 3 types of net (All-Channel, Wheel and Circle) under 3 conditions (Planning Period (PP), Rest Period (RP) and Control) were run in a single session with 5 complex problems to determine whether a single 2-minute planning period after solution of the first problem would result in…

  19. Embodied Interactions in Human-Machine Decision Making for Situation Awareness Enhancement Systems

    DTIC Science & Technology

    2016-06-09

    characterize differences in spatial navigation strategies in a complex task, the Traveling Salesman Problem (TSP). For the second year, we developed ... visual processing, leading to better solutions for spatial optimization problems. I will develop a framework to determine which body expressions best ... methods include systematic characterization of gestures during complex problem solving.

  20. Determinants of High-School Dropout: A Longitudinal Study in a Deprived Area of Japan.

    PubMed

    Tabuchi, Takahiro; Fujihara, Sho; Shinozaki, Tomohiro; Fukuhara, Hiroyuki

    2018-05-19

    Our objective in this study was to find determinants of high-school dropout in a deprived area of Japan using longitudinal data, including socio-demographic and junior high school-period information. We followed 695 students who graduated from the junior high school located in a deprived area of Japan between 2002 and 2010 for 3 years after graduation (614 students: follow-up rate, 88.3%). Multivariable log-binomial regression models were used to calculate the prevalence ratios (PRs) for high-school dropout, using multiple imputation (MI) to account for non-response at follow-up. The MI model estimated that 18.7% of students dropped out of high school in approximately 3 years. In the covariates-adjusted model, three factors were significantly associated with high-school dropout: ≥10 days of tardy arrival in junior high school (PR 6.44; 95% confidence interval [CI], 1.69-24.6 for "10-29 days of tardy arrival" and PR 8.01; 95% CI, 2.05-31.3 for "≥30 days of tardy arrival" compared with "0 days of tardy arrival"), daily smoking (PR 2.01; 95% CI, 1.41-2.86), and severe problems, such as abuse and neglect (PR 1.66; 95% CI, 1.16-2.39). Among students with ≥30 days of tardy arrival in addition to daily smoking or experience of severe problems, ≥50% high-school dropout rates were observed. Three determinants of high-school dropout were found: smoking, tardy arrival, and experience of severe problems. These factors were correlated and should be treated as warning signs of complex behavioral and academic problems. Parents, educators, and policy makers should work together to implement effective strategies to prevent school dropout.
