Sample records for extreme-scale problem solving

  1. Neighboring extremals of dynamic optimization problems with path equality constraints

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
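    To make the reduction concrete, a linear two-point boundary-value problem of the kind described can be sketched with SciPy's generic collocation solver. This is not the paper's modified backward sweep method; the equation, boundary values, and grid below are illustrative choices only.

    ```python
    import numpy as np
    from scipy.integrate import solve_bvp

    # Linear, time-varying TPBVP: y'' + (1 + t) y = 0,  y(0) = 0, y(1) = 1,
    # written as a first-order system y0' = y1, y1' = -(1 + t) y0.
    def rhs(t, y):
        return np.vstack([y[1], -(1.0 + t) * y[0]])

    def bc(ya, yb):
        return np.array([ya[0] - 0.0, yb[0] - 1.0])

    t = np.linspace(0.0, 1.0, 50)
    y0 = np.zeros((2, t.size))           # initial guess for the profile
    sol = solve_bvp(rhs, bc, t, y0)
    print(sol.status, sol.y[0, -1])      # status 0 on success; y(1) ~ 1
    ```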

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, Edmond

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism of extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the entries of the desired factorization.
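    A minimal serial sketch of the idea: each entry of L and U satisfies a bilinear equation, and repeated sweeps that update entries from current values stand in for the asynchronous parallel updates. Dense NumPy is used purely for brevity; the real method works on the sparsity pattern with fine-grained parallel updates.

    ```python
    import numpy as np

    def ilu_sweeps(A, n_sweeps=10):
        """Fixed-point sweeps for an incomplete LU on A's pattern: each
        nonzero of L/U satisfies a bilinear equation in the other entries."""
        n = A.shape[0]
        pattern = list(zip(*np.nonzero(A)))
        L = np.eye(n)                        # unit lower triangular (l_ii = 1)
        U = np.triu(A).astype(float)         # initial guess: U = triu(A)
        for _ in range(n_sweeps):
            for i, j in pattern:
                k = min(i, j)                # sum over k < min(i, j)
                s = A[i, j] - L[i, :k] @ U[:k, j]
                if i > j:                    # strictly lower: solve for l_ij
                    L[i, j] = s / U[j, j]
                else:                        # upper incl. diagonal: u_ij
                    U[i, j] = s
        return L, U

    A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
    L, U = ilu_sweeps(A)
    print(np.abs(L @ U - A).max())           # tiny: tridiagonal ILU is exact
    ```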

  3. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
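    The Jacobian-free Newton-Krylov step can be illustrated with SciPy's newton_krylov on a toy 1D reaction-diffusion residual; Jacobian-vector products are approximated by finite differences of the residual, so no Jacobian is ever formed. The paper's physics-based preconditioner and FAC solver on AMR grids are omitted here.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # Steady reaction-diffusion: u'' = u**3 - 1 on (0,1), u(0) = u(1) = 0.
    n = 64
    h = 1.0 / (n + 1)

    def residual(u):
        upad = np.concatenate([[0.0], u, [0.0]])             # Dirichlet BCs
        lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
        return lap - (u**3 - 1.0)

    u = newton_krylov(residual, np.zeros(n), method='lgmres')
    print(u.max())
    ```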

  4. Axions, inflation and the anthropic principle

    NASA Astrophysics Data System (ADS)

    Mack, Katherine J.

    2011-07-01

    The QCD axion is the leading solution to the strong-CP problem, a dark matter candidate, and a possible result of string theory compactifications. However, for axions produced before inflation, symmetry-breaking scales of f_a ≳ 10^12 GeV (which are favored in string-theoretic axion models) are ruled out by cosmological constraints unless both the axion misalignment angle θ_0 and the inflationary Hubble scale H_I are extremely fine-tuned. We show that attempting to accommodate a high-f_a axion in inflationary cosmology leads to a fine-tuning problem that is worse than the strong-CP problem the axion was originally invented to solve. We also show that this problem remains unresolved by anthropic selection arguments commonly applied to the high-f_a axion scenario.
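    For context, the cosmological constraint at work is the standard vacuum-misalignment relic-abundance estimate, not quoted in the abstract; the prefactor is O(0.1-1) depending on the treatment of anharmonicity and the QCD susceptibility:

    ```latex
    \Omega_a h^2 \;\sim\; 0.2\,\theta_0^2 \left(\frac{f_a}{10^{12}\,\mathrm{GeV}}\right)^{7/6}
    ```

    Requiring Ω_a h² ≲ 0.12 with f_a well above 10^12 GeV then forces θ_0 ≪ 1, which is the fine-tuning discussed above.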

  5. Analytical Cost Metrics: Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy-efficient manner?"

  6. Which Extreme Variant of the Problem-Solving Method of Teaching Should Be More Characteristic of the Many Teacher Variations of Problem-Solving Teaching?

    ERIC Educational Resources Information Center

    Mahan, Luther A.

    1970-01-01

    Compares the effects of two problem-solving teaching approaches. Lower ability students in an activity group demonstrated superior growth in basic science understanding, problem-solving skills, science interests, personal adjustment, and school attitudes. Neither method favored cognitive learning by higher ability students. (PR)

  7. The Prospects of Accounting at Mining Enterprises as a Factor of Ensuring their Sustainable Development

    NASA Astrophysics Data System (ADS)

    Tyuleneva, Tatiana

    2017-11-01

    One of the problems of sustainable development of mining companies is attracting additional investment. Solving it requires access to international capital markets; in this context, enterprises need to prepare financial statements that meet international requirements, based on the data generated by the accounting system. The article considers the basic problems of accounting in the extractive industries arising from the nature of the industry, and evaluates how completely they are solved within the framework of international financial reporting standards. In addition, it lists the characteristics of accounting in the mining industry, due to the peculiarities of the production process, that need to be considered to solve these problems. This sector is extremely important for individual countries and on a global scale.

  8. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box, regarded as the target region, is calculated. Extremal regions are obtained by MSER. After being fitted to ellipses, those regions are normalized into unit circles and represented with SIFT descriptors. Initial matches are obtained where the ratio of the closest descriptor distance to the second-closest distance is below a threshold, and outlier points are eliminated with RANSAC. Experimental results indicate the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale, and illumination.
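    A rough OpenCV sketch of this pipeline follows. The file names are hypothetical, and subsampling region points for keypoints is a simplification of the paper's ellipse-fitting and normalization step; the ratio test and RANSAC stages mirror the abstract.

    ```python
    import cv2
    import numpy as np

    # Hypothetical inputs; any two views of the same target will do.
    img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

    mser = cv2.MSER_create()   # proposes stable extremal regions
    sift = cv2.SIFT_create()   # describes keypoints sampled from them

    def mser_sift(img):
        regions, _ = mser.detectRegions(img)
        kps = [cv2.KeyPoint(float(x), float(y), 8.0)
               for r in regions for x, y in r[::max(1, len(r) // 5)]]
        return sift.compute(img, kps)

    kp1, des1 = mser_sift(img1)
    kp2, des2 = mser_sift(img2)

    # Lowe ratio test, then RANSAC to drop outliers (as in the abstract).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(int(mask.sum()), "inlier matches")
    ```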

  9. The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff

    2004-01-01

    The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instrument assessed attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…

  10. Axions, inflation and the anthropic principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mack, Katherine J., E-mail: mack@ast.cam.ac.uk

    2011-07-01

    The QCD axion is the leading solution to the strong-CP problem, a dark matter candidate, and a possible result of string theory compactifications. However, for axions produced before inflation, symmetry-breaking scales of f_a ≳ 10^12 GeV (which are favored in string-theoretic axion models) are ruled out by cosmological constraints unless both the axion misalignment angle θ_0 and the inflationary Hubble scale H_I are extremely fine-tuned. We show that attempting to accommodate a high-f_a axion in inflationary cosmology leads to a fine-tuning problem that is worse than the strong-CP problem the axion was originally invented to solve. We also show that this problem remains unresolved by anthropic selection arguments commonly applied to the high-f_a axion scenario.

  11. Case management services for work related upper extremity disorders. Integrating workplace accommodation and problem solving.

    PubMed

    Shaw, W S; Feuerstein, M; Lincoln, A E; Miller, V I; Wood, P M

    2001-08-01

    A case manager's ability to obtain worksite accommodations and engage workers in active problem solving may improve health and return-to-work outcomes for clients with work related upper extremity disorders (WRUEDs). This study examines the feasibility of a 2-day training seminar to help nurse case managers identify ergonomic risk factors, provide accommodation, and conduct problem solving skills training with workers' compensation claimants recovering from WRUEDs. Eight procedural steps to this case management approach were identified, translated into a training workshop format, and conveyed to 65 randomly selected case managers. Results indicate moderate to high self-ratings of confidence to perform ergonomic assessments (mean = 7.5 of 10) and to provide problem solving skills training (mean = 7.2 of 10) after the seminar. This training format was suitable for experienced case managers and generated a moderate to high level of confidence to use this case management approach.

  12. Improving extreme-scale problem solving: assessing electronic brainstorming effectiveness in an industrial setting.

    PubMed

    Dornburg, Courtney C; Stevens, Susan M; Hendrickson, Stacey M L; Davidson, George S

    2009-08-01

    An experiment was conducted to compare the effectiveness of individual versus group electronic brainstorming in addressing difficult, real-world challenges. Although industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges during the course of 4 days. Employees and contractors at a national laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a real-world problem. The data demonstrate that (for this design) individuals perform at least as well as groups in producing quantity of electronic ideas, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p < .05) outperformed the group. When quality is used to benchmark success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses than by electronically convening a group. This research suggests that industrial reliance on electronic problem-solving groups should be tempered, and that large nominal groups may be more appropriate corporate problem-solving vehicles.

  13. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    PubMed

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  14. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, improving solution efficiency. The set of nonlinear constraints (named the complicating constraints) that makes the solution of the model complex and time-consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by directly solving the complete model in one single step. In all examples, the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful where computation time is a critical factor in obtaining an optimized solution in due time.
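    The warm-start pattern can be sketched with SciPy on a toy stand-in: step one drops the complicating constraint, step two solves the complete model starting from the step-one solution. The objective and constraint below are illustrative, not the article's water-resources model.

    ```python
    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    # Toy model: smooth objective plus one "complicating" nonlinear
    # equality constraint x0 * x1 = 1.
    obj = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
    complicating = NonlinearConstraint(lambda x: x[0] * x[1], 1.0, 1.0)

    x0 = np.zeros(2)
    step1 = minimize(obj, x0)                                   # relaxed model
    step2 = minimize(obj, step1.x, constraints=[complicating])  # complete model
    print(step2.x, step2.fun)
    ```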

  15. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
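    For orientation, the conventional MCMC baseline that the project's structure-exploiting methods aim to beat can be illustrated on a toy scalar inverse problem; the parameter-to-observable map, prior, and noise level below are arbitrary illustrative choices.

    ```python
    import numpy as np

    # Toy Bayesian inversion by random-walk Metropolis.
    # Model: d = G(m) + noise, prior m ~ N(0, 1), noise ~ N(0, 0.1^2).
    rng = np.random.default_rng(3)
    G = lambda m: np.sin(m) + 0.5 * m          # parameter-to-observable map
    d = G(1.2) + rng.normal(scale=0.1)         # synthetic observation

    def log_post(m):
        return -0.5 * m**2 - 0.5 * ((d - G(m)) / 0.1) ** 2

    m, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(20000):
        cand = m + rng.normal(scale=0.3)
        lpc = log_post(cand)
        if np.log(rng.random()) < lpc - lp:    # Metropolis accept/reject
            m, lp = cand, lpc
        samples.append(m)
    print(np.mean(samples[5000:]), np.std(samples[5000:]))
    ```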

  16. Investigating the psychological resilience, self-confidence and problem-solving skills of midwife candidates.

    PubMed

    Ertekin Pinar, Sukran; Yildirim, Gulay; Sayin, Neslihan

    2018-05-01

    The high level of psychological resilience, self-confidence and problem-solving skills of midwife candidates plays an important role in increasing the quality of health care and in fulfilling their responsibilities towards patients. This study was conducted to investigate the psychological resilience, self-confidence and problem-solving skills of midwife candidates. It is a descriptive quantitative study with a convenience sample of midwife candidates (N = 270) studying at a Health Sciences Faculty in Turkey's Central Anatolia Region. For data collection, the Personal Information Form, the Psychological Resilience Scale for Adults (PRSA), the Self-Confidence Scale (SCS), and the Problem Solving Inventory (PSI) were used. There was a negative, moderate-level significant relationship between Problem Solving Inventory scores and Psychological Resilience Scale for Adults scores (r = -0.619; p = 0.000), and between Problem Solving Inventory and Self-Confidence Scale scores (r = -0.524; p = 0.000); lower PSI scores indicate better problem solving, hence the negative correlations. There was a positive, moderate-level significant relationship between Psychological Resilience Scale for Adults scores and Self-Confidence Scale scores (r = 0.583; p = 0.000). There was a statistically significant difference (p < 0.05) in Problem Solving Inventory and Psychological Resilience Scale for Adults scores according to whether candidates receive support in a difficult situation. As psychological resilience and self-confidence levels increase, problem-solving skills increase; additionally, as self-confidence increases, psychological resilience increases too. Psychological resilience, self-confidence, and problem-solving skills of midwife candidates in their first year of studies are higher than those of candidates in their fourth year. Self-confidence and psychological resilience are high among midwife candidates aged between 17 and 21; self-confidence and problem-solving skills are high among residents of city centers; and psychological resilience is high among those who perceive their monthly income as sufficient. Psychological resilience and problem-solving skills are also high for midwife candidates who receive social support. The fact that levels of self-confidence, problem-solving skills and psychological resilience of fourth-year students are found to be low presents a situation that should be taken into consideration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Emotion dysregulation, problem-solving, and hopelessness.

    PubMed

    Vatan, Sevginar; Lester, David; Gunn, John F

    2014-04-01

    A sample of 87 Turkish undergraduate students was administered scales to measure hopelessness, problem-solving skills, emotion dysregulation, and psychiatric symptoms. All of the scores from these scales were strongly associated. In a multiple regression, hopelessness scores were predicted by poor problem-solving skills and emotion dysregulation.

  18. Inflationary dynamics for matrix eigenvalue problems

    PubMed Central

    Heller, Eric J.; Kaplan, Lev; Pollmann, Frank

    2008-01-01

    Many fields of science and engineering require finding eigenvalues and eigenvectors of large matrices. The solutions can represent oscillatory modes of a bridge, a violin, the disposition of electrons around an atom or molecule, the acoustic modes of a concert hall, or hundreds of other physical quantities. Often only the few eigenpairs with the lowest or highest frequency (extremal solutions) are needed. Methods that have been developed over the past 60 years to solve such problems include the Lanczos algorithm, Jacobi–Davidson techniques, and the conjugate gradient method. Here, we present a way to solve the extremal eigenvalue/eigenvector problem, turning it into a nonlinear classical mechanical system with a modified Lagrangian constraint. The constraint induces exponential inflationary growth of the desired extremal solutions. PMID:18511564
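    A related but simpler iteration conveys the flavor: gradient descent on the Rayleigh quotient with renormalization, so the extremal eigencomponent comes to dominate. This is not the authors' inflationary Lagrangian dynamics, just the standard idea it builds on.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                       # symmetric test matrix

    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(5000):
        r = x @ A @ x                       # Rayleigh quotient (unit x)
        x -= 0.01 * 2 * (A @ x - r * x)     # gradient step on the sphere
        x /= np.linalg.norm(x)

    print(x @ A @ x, np.linalg.eigvalsh(A)[0])   # should nearly agree
    ```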

  19. ExM:System Support for Extreme-Scale, Many-Task Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S

    The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem-solving methods and application classes. In this document, we report on the combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at the University of Chicago, over the first 8 months (through April 30, 2011).

  20. The Place and Purpose of Combinatorics

    ERIC Educational Resources Information Center

    Hurdle, Zach; Warshauer, Max; White, Alex

    2016-01-01

    The desire to persuade students to avoid strictly memorizing formulas is a recurring theme throughout discussions of curriculum and problem solving. In combinatorics, a branch of discrete mathematics, problems can be easy to write--identify a few categories, add a few restrictions, specify an outcome--yet extremely challenging to solve. A lesson…

  1. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  2. Symmetry Breaking, Unification, and Theories Beyond the Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Yasunori

    2009-07-31

    A model was constructed in which the supersymmetric fine-tuning problem is solved without extending the Higgs sector at the weak scale. We have demonstrated that the model can satisfy all the phenomenological constraints while avoiding excessive fine-tuning. We have also studied implications of the model for dark matter physics and collider physics. I have proposed an extremely simple construction for models of gauge mediation. We found that the μ problem can be simply and elegantly solved in a class of models where the Higgs fields couple directly to the supersymmetry breaking sector. We proposed a new way of addressing the flavor problem of supersymmetric theories. We have proposed a new framework for constructing theories of grand unification. We constructed a simple and elegant model of dark matter which explains the excess flux of electrons/positrons. We constructed a model of dark energy in which evolving quintessence-type dark energy is naturally obtained. We studied whether we can find evidence of the multiverse.

  3. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  4. Measuring health-related problem solving among African Americans with multiple chronic conditions: application of Rasch analysis.

    PubMed

    Fitzpatrick, Stephanie L; Hill-Briggs, Felicia

    2015-10-01

    Identification of patients with poor chronic disease self-management skills can facilitate treatment planning, determine the effectiveness of interventions, and reduce disease complications. This paper describes the use of a Rasch model, the Rating Scale Model, to examine the psychometric properties of the 50-item Health Problem-Solving Scale (HPSS) among 320 African American patients at high risk for cardiovascular disease. Items on the positive/effective HPSS subscales targeted patients at low, moderate, and high levels of positive/effective problem solving, whereas items on the negative/ineffective subscales mostly targeted those at moderate or high levels of ineffective problem solving. Validity was examined by correlating factor scores on the measure with clinical and behavioral measures. Items on the HPSS show promise for assessing health-related problem solving among high-risk patients. However, further revisions of the scale are needed to increase its usability and validity with large, diverse patient populations in the future.
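    For readers unfamiliar with these models, the probability forms can be sketched generically; the functions below give the dichotomous Rasch model and Andrich's Rating Scale Model in their textbook forms, with illustrative parameter values unrelated to the HPSS data.

    ```python
    import numpy as np

    def rasch_prob(theta, b):
        """Dichotomous Rasch model: P(correct) for a person with latent
        trait theta on an item of difficulty b."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def rating_scale_probs(theta, delta, taus):
        """Andrich Rating Scale Model: probabilities of the m+1 ordered
        categories of an item at location delta with shared thresholds taus."""
        steps = theta - (delta + np.asarray(taus))          # one per threshold
        logits = np.concatenate([[0.0], np.cumsum(steps)])  # category numerators
        p = np.exp(logits - logits.max())                   # stable softmax
        return p / p.sum()

    # A person half a logit above an item's location, 4 response categories:
    print(rating_scale_probs(0.5, 0.0, [-1.0, 0.0, 1.0]))
    ```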

  5. Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coon, Ethan; Porter, Mark L.; Kang, Qinjun

    2012-06-18

    In spatially and temporally localized instances, capturing sub-reservoir scale information is necessary. Capturing sub-reservoir scale information everywhere is neither necessary nor computationally possible. At the pore scale, the lattice Boltzmann method (LBM) provides an extremely scalable, efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows hybrid approaches to be extended to solve multi-scale reactive transport.

  6. The Relationship between Students' Problem Posing and Problem Solving Abilities and Beliefs: A Small-Scale Study with Chinese Elementary School Children

    ERIC Educational Resources Information Center

    Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven

    2013-01-01

    The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…

  7. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
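    A minimal serial simulated annealing loop on a quadratic test problem (the kind of benchmark mentioned above) shows the heuristics SPAN retains; SPAN parallelizes candidate evaluation within a neighborhood, which this sketch does not attempt.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: np.sum((x - 2.0) ** 2)        # minimum at x = [2, 2, 2]

    x = rng.uniform(-10, 10, size=3)
    fx = f(x)
    T = 1.0
    for step in range(20000):
        cand = x + rng.normal(scale=0.5, size=3)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc                     # Metropolis acceptance
        T *= 0.9995                              # geometric cooling schedule
    print(x, fx)
    ```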

  8. Moving Bodies, Building Minds: Foster Preschoolers' Critical Thinking and Problem Solving through Movement

    ERIC Educational Resources Information Center

    Marigliano, Michelle L.; Russo, Michele J.

    2011-01-01

    Creative movement is an ideal way to help young children develop critical-thinking and problem-solving skills. Most young children are, by nature, extremely physical. They delight in exploring the world with their bodies and expressing their ideas and feelings through movement. During creative movement experiences, children learn to think before…

  9. Classroom-tested Recommendations for Teaching Problem Solving within a Traditional College Course: Genetics.

    ERIC Educational Resources Information Center

    Smith, Mike U.

    Both teachers and students alike acknowledge that genetics and genetics problem-solving are extremely difficult to learn and to teach. Therefore, a number of recommendations for teaching college genetics are offered. Although few of these ideas have as yet been tested in controlled experiments, they are supported by research and experience and may…

  10. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs, in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order, regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
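    A toy illustration of how relaxed synchronization enters a standard finite-difference update: a 1D heat equation split into two subdomains whose halo exchange sometimes fails to arrive, so a subdomain uses a one-step-old neighbor value. This is not the authors' asynchrony-tolerant schemes, just the stale-read mechanism they analyze.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, dt, dx, nu = 64, 1e-4, 1.0 / 64, 1.0
    u = np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))
    halo_from_left = u[n // 2 - 1]    # left block's last point, seen by right
    halo_from_right = u[n // 2]       # right block's first point, seen by left

    for step in range(500):
        if rng.random() < 0.8:        # 80% of steps the exchange "arrives"
            halo_from_left = u[n // 2 - 1]
            halo_from_right = u[n // 2]
        up, um = np.roll(u, -1), np.roll(u, 1)
        # interface neighbors come from (possibly stale) halo copies
        um[n // 2] = halo_from_left
        up[n // 2 - 1] = halo_from_right
        u = u + nu * dt / dx**2 * (up - 2 * u + um)

    print(np.abs(u).max())            # decays roughly like exp(-4*pi^2*nu*t)
    ```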

  11. Identifying barriers to recovery from work related upper extremity disorders: use of a collaborative problem solving technique.

    PubMed

    Shaw, William S; Feuerstein, Michael; Miller, Virginia I; Wood, Patricia M

    2003-08-01

    Improving health and work outcomes for individuals with work related upper extremity disorders (WRUEDs) may require a broad assessment of potential return to work barriers by engaging workers in collaborative problem solving. In this study, half of all nurse case managers from a large workers' compensation system were randomly selected and invited to participate in a randomized, controlled trial of an integrated case management (ICM) approach for WRUEDs. The focus of ICM was problem solving skills training and workplace accommodation. Volunteer nurses attended a 2 day ICM training workshop including instruction in a 6 step process to engage clients in problem solving to overcome barriers to recovery. A chart review of WRUED case management reports (n = 70) during the following 2 years was conducted to extract case managers' reports of barriers to recovery and return to work. Case managers documented from 0 to 21 barriers per case (M = 6.24, SD = 4.02) within 5 domains: signs and symptoms (36%), work environment (27%), medical care (13%), functional limitations (12%), and coping (12%). Compared with case managers who did not receive the training (n = 67), workshop participants identified more barriers related to signs and symptoms, work environment, functional limitations, and coping (p < .05), but not to medical care. Problem solving skills training may help focus case management services on the most salient recovery factors affecting return to work.

  12. Crystallized verbal skills in schizophrenia: relationship to neurocognition, symptoms, and functional status.

    PubMed

    Kurtz, Matthew M; Donato, Jad; Rose, Jennifer

    2011-11-01

    To study the relationship of superior (i.e., ≥ 90th percentile), average (11th-89th percentile) or extremely low (i.e., ≤ 10th percentile) crystallized verbal skills to neurocognitive profiles, symptoms and everyday life function in schizophrenia. Crystallized verbal skill was derived from Vocabulary subtest scores from the Wechsler Adult Intelligence Scale (WAIS). Out of a sample of 165 stable outpatients with schizophrenia we identified 25 participants with superior crystallized verbal skill, 104 participants with average verbal skill, and 36 participants with extremely low crystallized verbal skill. Each participant was administered measures of attention, working memory, verbal learning and memory, problem-solving and processing speed, as well as symptom and performance-based adaptive life skill assessments. The magnitude of neuropsychological impairment across the three groups was different, after adjusting for group differences in education and duration of illness. Working memory, and verbal learning and memory skills were different across all three groups, while processing speed differentiated the extremely low verbal skill group from the other two groups and problem-solving differentiated the very low verbal skill group from the superior verbal skill group. There were no group differences in sustained attention. Capacity measures of everyday life skills were different across each of the three groups. Crystallized verbal skill in schizophrenia is related to the magnitude of impairment in neurocognitive function and performance-based skills in everyday life function. Patterns of neuropsychological impairment were similar across different levels of crystallized verbal skill.

  13. An Initial Model of Requirements Traceability: An Empirical Study

    DTIC Science & Technology

    1992-09-22

    procedures have been used extensively in the study of human problem-solving, including such areas as general problem-solving behavior, physics problem... "been doing unless you have traceability." "Humans don't go back to the requirements enough." "Traceability should be extremely helpful with... by constraints on its usage: "Traceability needs to be something that humans can work with, not just a whip held over people." "Traceability should

  14. Semi-supervised tracking of extreme weather events in global spatio-temporal climate datasets

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Prabhat, M.; Williams, D. N.

    2017-12-01

    Deep neural networks have been successfully applied to the problem of detecting extreme weather events in large-scale climate datasets, attaining performance that overshadows all previous hand-crafted methods. Recent work has shown that a multichannel spatiotemporal encoder-decoder CNN architecture is able to localize events with semi-supervised bounding boxes. Motivated by this work, we propose a new learning method based on Variational Auto-Encoders (VAE) and Long Short-Term Memory (LSTM) to track extreme weather events in spatio-temporal datasets. We treat spatio-temporal object tracking as learning the probabilistic distribution of continuous latent features of an auto-encoder using stochastic variational inference. For this, we assume that our datasets are i.i.d. and that the latent features can be modeled by a Gaussian distribution. In the proposed method, we first train the VAE to generate an approximate posterior given multichannel climate input containing an extreme climate event at a fixed time. Then, we predict the bounding box, location, and class of extreme climate events using convolutional layers given an input concatenating three features: the embedding, the sampled mean, and the standard deviation. Lastly, we train the LSTM on the concatenated input to learn the temporal information in the dataset by recurrently feeding the output back into the next time-step's input of the VAE. Our contribution is two-fold. First, we show the first semi-supervised end-to-end architecture based on a VAE for tracking extreme weather events, which can be applied to massive-scale unlabeled climate datasets. Second, the temporal movement of events is incorporated into bounding-box prediction via the LSTM, which can improve localization accuracy. To our knowledge, this technique has not been explored in either the climate or the machine learning community.
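    The VAE building block can be sketched generically in PyTorch; the dense layers, dimensions, and random input below are placeholders, not the authors' convolutional architecture, and the downstream bounding-box head and LSTM are omitted.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        """Minimal VAE: the encoder outputs a Gaussian posterior over latents;
        the reparameterization trick makes the ELBO differentiable."""
        def __init__(self, d_in=1024, d_hid=256, d_lat=32):
            super().__init__()
            self.enc = nn.Linear(d_in, d_hid)
            self.mu = nn.Linear(d_hid, d_lat)
            self.logvar = nn.Linear(d_hid, d_lat)
            self.dec = nn.Sequential(nn.Linear(d_lat, d_hid), nn.ReLU(),
                                     nn.Linear(d_hid, d_in))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return self.dec(z), mu, logvar

    def elbo_loss(x, recon, mu, logvar):
        recon_term = F.mse_loss(recon, x, reduction="sum")       # reconstruction
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon_term + kl

    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 1024)              # stand-in for flattened climate patches
    recon, mu, logvar = model(x)
    loss = elbo_loss(x, recon, mu, logvar)
    loss.backward(); opt.step()
    print(float(loss))
    ```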

  15. Rosetta Structure Prediction as a Tool for Solving Difficult Molecular Replacement Problems.

    PubMed

    DiMaio, Frank

    2017-01-01

    Molecular replacement (MR), a method for solving the crystallographic phase problem using phases derived from a model of the target structure, has proven extremely valuable, accounting for the vast majority of structures solved by X-ray crystallography. However, when the resolution of data is low, or the starting model is very dissimilar to the target protein, solving structures via molecular replacement may be very challenging. In recent years, protein structure prediction methodology has emerged as a powerful tool in model building and model refinement for difficult molecular replacement problems. This chapter describes some of the tools available in Rosetta for model building and model refinement specifically geared toward difficult molecular replacement cases.

  16. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
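    For contrast, the conventional fine-mesh baseline the method avoids looks like this in 1D: discretize the Helmholtz equation directly and solve one large sparse system. The wavenumber profile and source are illustrative choices; the paper's multiscale basis construction is not attempted here.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 1D Helmholtz: u'' + k(x)^2 u = -s(x) on (0,1), u = 0 at both ends.
    n = 4000
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    k = 2 * np.pi * (5 + 2 * np.sin(8 * np.pi * x))   # heterogeneous wavenumber
    main = -2.0 / h**2 + k**2
    off = np.ones(n - 1) / h**2
    A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
    s = np.exp(-((x - 0.5) ** 2) / 1e-3)              # localized source
    u = spla.spsolve(A, -s)                           # one large sparse solve
    print(np.abs(u).max())
    ```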

  17. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  18. Factors affecting the social problem-solving ability of baccalaureate nursing students.

    PubMed

    Lau, Ying

    2014-01-01

    The hospital environment is characterized by time pressure, uncertain information, conflicting goals, high stakes, stress, and dynamic conditions. These demands mean there is a need for nurses with social problem-solving skills. This study set out to (1) investigate the social problem-solving ability of Chinese baccalaureate nursing students in Macao and (2) identify the association between communication skill, clinical interaction, interpersonal dysfunction, and social problem-solving ability. All nursing students were recruited in one public institute through the census method. The research design was exploratory, cross-sectional, and quantitative. The study used the Chinese version of the Social Problem Solving Inventory short form (C-SPSI-R), the Communication Ability Scale (CAS), the Clinical Interactive Scale (CIS), and the Interpersonal Dysfunction Checklist (IDC). Macao nursing students were more likely to use the two constructive or adaptive dimensions rather than the three dysfunctional dimensions of the C-SPSI-R to solve their problems. Multiple linear regression analysis revealed that communication ability (β = .305, p < .0001), clinical interaction (β = .129, p = .047), and interpersonal dysfunction (β = -.402, p < .0001) were associated with social problem-solving after controlling for covariates. Macao has had no problem-solving training in its educational curriculum; effective problem-solving training should be implemented as part of the curriculum. With so many changes in healthcare today, nurses must be good social problem-solvers in order to deliver holistic care. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. How to Solve Polyhedron Problem?

    NASA Astrophysics Data System (ADS)

    Wijayanti, A.; Kusumah, Y. S.; Suhendra

    2017-09-01

    The purpose of this research is to identify possible strategies for solving problems on the polyhedron topic, with Knisley's learning model as scaffolding for the students. This research was conducted using a mixed method with a sequential explanatory design. The researchers used a purposive sampling technique to obtain two classes, a Knisley class and a conventional class, and an extreme case sampling technique to obtain interview data. The instruments used are tests, observation sheets, and interview guidelines. The results of the research show that: (1) students' strategies to solve polyhedron problems were grouped into two steps: partitioning the problem to find the solution, and making a mathematical model of the given mathematical sentence and then connecting it with concepts the students already know; (2) students' mathematical problem-solving ability in the Knisley class is higher than in the conventional class.

  20. Physical activity problem-solving inventory for adolescents: development and initial validation.

    PubMed

    Thompson, Debbe; Bhatt, Riddhi; Watson, Kathy

    2013-08-01

    Youth encounter physical activity barriers, often called problems. The purpose of problem solving is to generate solutions to overcome the barriers. Enhancing problem-solving ability may enable youth to be more physically active. Therefore, a method for reliably assessing physical activity problem-solving ability is needed. The purpose of this research was to report the development and initial validation of the physical activity problem-solving inventory for adolescents (PAPSIA). Qualitative and quantitative procedures were used. The social problem-solving inventory for adolescents guided the development of the PAPSIA scale. Youth (14- to 17-year-olds) were recruited using standard procedures, such as distributing flyers in the community and to organizations likely to be attended by adolescents. Cognitive interviews were conducted in person. Adolescents completed pen and paper versions of the questionnaire and/or scales assessing social desirability, self-reported physical activity, and physical activity self-efficacy. An expert panel review, cognitive interviews, and a pilot study (n = 129) established content validity. Construct, concurrent, and predictive validity were also established (n = 520 youth). PAPSIA is a promising measure for assessing youth physical activity problem-solving ability. Future research will assess its validity with objectively measured physical activity.

  1. The relationship between family functioning and the crime types in incarcerated children.

    PubMed

    Teker, Kamil; Topçu, Seda; Başkan, Sevgi; Orhon, Filiz Ş; Ulukol, Betül

    2017-06-01

    We investigated the relationship between family functioning and crime types in incarcerated children. One hundred eighty-two incarcerated children aged 13-18 years, confined in child-youth prisons and child correctional facilities, were enrolled in this descriptive study. Participants completed demographic questions and the McMaster Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) in face-to-face interviews. The crime types were theft, assault (bodily injury), robbery, sexual assault, drug trafficking, and murder. When socio-demographic characteristics were compared using the FAD, children who grew up in a nuclear family had significantly better scores on the problem solving and communication subscales, and children whose parents owned their own house had significantly better problem solving scores. When we compared crime types using the problem solving, communication, and general functioning subscales of the FAD, the assault (bodily injury) group scored significantly lower than the theft, sexual assault, and murder groups, and the drug trafficking group scored lower than the murder group. The drug trafficking group also scored lower than the theft group on the problem solving and general functioning subscales; on the problem solving subscale, the assault (bodily injury) group scored lower than the robbery and theft groups, and the drug trafficking group lower than the theft group. The communication and problem solving subscales of the FAD are the first scales to show impairment for incarcerated children. These subscales are associated with unplanned and less serious crimes, which we interpret as a cry for help from these children.

  2. A Case Study in an Integrated Development and Problem Solving Environment

    ERIC Educational Resources Information Center

    Deek, Fadi P.; McHugh, James A.

    2003-01-01

    This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…

  3. Contribution of problem-solving skills to fear of recurrence in breast cancer survivors.

    PubMed

    Akechi, Tatuo; Momino, Kanae; Yamashita, Toshinari; Fujita, Takashi; Hayashi, Hironori; Tsunoda, Nobuyuki; Iwata, Hiroji

    2014-05-01

    Although fear of recurrence is a major concern among breast cancer survivors after surgery, no standard strategies exist for alleviating this distress. This study examined the association between patients' problem-solving skills and their fear of recurrence and psychological distress among breast cancer survivors. Randomly selected, ambulatory, female patients with breast cancer participated in this study. They were asked to complete the Concerns about Recurrence Scale (CARS) and the Hospital Anxiety and Depression Scale. Multiple regression analyses were used to examine the associations. Data were obtained from 317 patients. Patients' problem-solving skills were significantly associated with all subscales of fear of recurrence and with overall worries measured by the CARS. In addition, patients' problem-solving skills were significantly associated with both their anxiety and depression. Our findings warrant clinical trials to investigate the effectiveness of psychosocial intervention programs that enhance patients' problem-solving skills in reducing fear of recurrence among breast cancer survivors.

  4. Does Problem-Solving Training for Family Caregivers Benefit Their Care Recipients With Severe Disabilities? A Latent Growth Model of the Project CLUES Randomized Clinical Trial

    PubMed Central

    Berry, Jack W.; Elliott, Timothy R.; Grant, Joan S.; Edwards, Gary; Fine, Philip R.

    2012-01-01

    Objective To examine whether an individualized problem-solving intervention provided to family caregivers of persons with severe disabilities provides benefits to both caregivers and their care recipients. Design Family caregivers were randomly assigned to an education-only control group or a problem-solving training (PST) intervention group. Participants received monthly contacts for 1 year. Participants Family caregivers (129 women, 18 men) and their care recipients (81 women, 66 men) consented to participate. Main Outcome Measures Caregivers completed the Social Problem-Solving Inventory–Revised, the Center for Epidemiological Studies-Depression scale, the Satisfaction with Life scale, and a measure of health complaints at baseline and in 3 additional assessments throughout the year. Care recipient depression was assessed with a short form of the Hamilton Depression Scale. Results Latent growth modeling was used to analyze data from the dyads. Caregivers who received PST reported a significant decrease in depression over time, and they also displayed gains in constructive problem-solving abilities and decreases in dysfunctional problem-solving abilities. Care recipients displayed significant decreases in depression over time, and these decreases were significantly associated with decreases in caregiver depression in response to training. Conclusions PST significantly improved the problem-solving skills of community-residing caregivers and also lessened their depressive symptoms. Care recipients in the PST group also had reductions in depression over time, and it appears that decreases in caregiver depression may account for this effect. PMID:22686549

  5. Does problem-solving training for family caregivers benefit their care recipients with severe disabilities? A latent growth model of the Project CLUES randomized clinical trial.

    PubMed

    Berry, Jack W; Elliott, Timothy R; Grant, Joan S; Edwards, Gary; Fine, Philip R

    2012-05-01

    To examine whether an individualized problem-solving intervention provided to family caregivers of persons with severe disabilities provides benefits to both caregivers and their care recipients. Family caregivers were randomly assigned to an education-only control group or a problem-solving training (PST) intervention group. Participants received monthly contacts for 1 year. Family caregivers (129 women, 18 men) and their care recipients (81 women, 66 men) consented to participate. Caregivers completed the Social Problem-Solving Inventory-Revised, the Center for Epidemiological Studies-Depression scale, the Satisfaction with Life scale, and a measure of health complaints at baseline and in 3 additional assessments throughout the year. Care recipient depression was assessed with a short form of the Hamilton Depression Scale. Latent growth modeling was used to analyze data from the dyads. Caregivers who received PST reported a significant decrease in depression over time, and they also displayed gains in constructive problem-solving abilities and decreases in dysfunctional problem-solving abilities. Care recipients displayed significant decreases in depression over time, and these decreases were significantly associated with decreases in caregiver depression in response to training. PST significantly improved the problem-solving skills of community-residing caregivers and also lessened their depressive symptoms. Care recipients in the PST group also had reductions in depression over time, and it appears that decreases in caregiver depression may account for this effect. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  6. Building Flexible User Interfaces for Solving PDEs

    NASA Astrophysics Data System (ADS)

    Logg, Anders; Wells, Garth N.

    2010-09-01

    FEniCS is a collection of software tools for the automated solution of differential equations by finite element methods. In this note, we describe how FEniCS can be used to solve a simple nonlinear model problem with varying levels of automation. At one extreme, FEniCS provides tools for the fully automated and adaptive solution of nonlinear partial differential equations. At the other extreme, FEniCS provides a range of tools that allow the computational scientist to experiment with novel solution algorithms.
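
    As a concrete illustration, a minimal sketch in the legacy FEniCS/DOLFIN Python interface is given below; the mesh, the coefficient (1 + u^2), the source term, and the boundary condition are illustrative assumptions rather than the model problem of the note. The residual form is stated symbolically and the Newton solve is left entirely to the library.

    ```python
    # Minimal sketch (legacy FEniCS/DOLFIN API); mesh, coefficient, source
    # term, and boundary condition are illustrative assumptions.
    from fenics import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                        DirichletBC, Constant, Expression, dot, grad, dx, solve)

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "Lagrange", 1)

    u = Function(V)      # the unknown; no TrialFunction for a nonlinear residual
    v = TestFunction(V)
    f = Expression("x[0]*sin(x[1])", degree=2)

    # Residual form F(u; v) = 0 for -div((1 + u^2) grad u) = f
    F = (1 + u**2) * dot(grad(u), grad(v)) * dx - f * v * dx
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    solve(F == 0, u, bc)  # FEniCS derives the Jacobian and runs Newton's method
    ```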

  7. Adaptation of Social Problem Solving for Children Questionnaire in 6 Age Groups and its Relationships with Preschool Behavior Problems

    ERIC Educational Resources Information Center

    Dereli-Iman, Esra

    2013-01-01

The Social Problem Solving for Children Scale is frequently used abroad to identify children's behavioral problems in their own words and to examine the ways they handle conflicts encountered in daily life and in interpersonal relationships. The primary purpose of this study was to adapt the Wally Child Social Problem-Solving Detective Game Test. In order to…

  8. Dispositional Insight Scale: Development and Validation of a Tool That Measures Propensity toward Insight in Problem Solving

    ERIC Educational Resources Information Center

    Ovington, Linda A.; Saliba, Anthony J.; Goldring, Jeremy

    2016-01-01

    This article reports the development of a brief self-report measure of dispositional insight problem solving, the Dispositional Insight Scale (DIS). From a representative Australian database, 1,069 adults (536 women and 533 men) completed an online questionnaire. An exploratory and confirmatory factor analysis revealed a 5-item scale, with all…

  9. Fidelity of Problem Solving in Everyday Practice: Typical Training May Miss the Mark

    ERIC Educational Resources Information Center

    Ruby, Susan F.; Crosby-Cooper, Tricia; Vanderwood, Michael L.

    2011-01-01

    With national attention on scaling up the implementation of Response to Intervention, problem solving teams remain one of the central components for development, implementation, and monitoring of school-based interventions. Studies have shown that problem solving teams evidence a sound theoretical base and demonstrated efficacy; however, limited…

  10. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve them more accurately and for longer times. We review a number of these different ideas here.

  11. Personality, problem solving, and adolescent substance use.

    PubMed

    Jaffee, William B; D'Zurilla, Thomas J

    2009-03-01

    The major aim of this study was to examine the role of social problem solving in the relationship between personality and substance use in adolescents. Although a number of studies have identified a relationship between personality and substance use, the precise mechanism by which this occurs is not clear. We hypothesized that problem-solving skills could be one such mechanism. More specifically, we sought to determine whether problem solving mediates, moderates, or both mediates and moderates the relationship between different personality traits and substance use. Three hundred and seven adolescents were administered the Substance Use Profile Scale, the Social Problem-Solving Inventory-Revised, and the Personality Experiences Inventory to assess personality, social problem-solving ability, and substance use, respectively. Results showed that the dimension of rational problem solving (i.e., effective problem-solving skills) significantly mediated the relationship between hopelessness and lifetime alcohol and marijuana use. The theoretical and clinical implications of these results were discussed.

  12. Social problem-solving in Chinese baccalaureate nursing students.

    PubMed

    Fang, Jinbo; Luo, Ying; Li, Yanhua; Huang, Wenxia

    2016-11-01

To describe social problem solving in Chinese baccalaureate nursing students. A descriptive cross-sectional study was conducted with a cluster sample of 681 Chinese baccalaureate nursing students. The Chinese version of the Social Problem-Solving scale was used. Descriptive analyses, independent t-tests, and one-way analysis of variance were applied to analyze the data. The final-year nursing students presented the highest scores for positive social problem-solving skills. Students with experience of self-directed and problem-based learning presented significantly higher scores on the Positive Problem Orientation subscale. The group with critical thinking training experience, however, displayed higher negative problem-solving scores compared with the group without such experience. Social problem-solving abilities varied based upon teaching-learning strategies. Self-directed and problem-based learning may be recommended as effective ways to improve social problem-solving ability. © 2016 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  13. Extremal Optimization for Quadratic Unconstrained Binary Problems

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean presentation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring small differences in couplings effectively while sufficiently aligning with those strong external fields. However, we also find a simple solution to that problem, which indicates that the external fields tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses without (or with very small) fields. We explore the impact of the weight distributions of QUBO formulations found in the operations research literature and analyze their meaning in a spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as D-Wave.
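
    The Boolean-to-spin transformation sketched in this abstract is the standard QUBO-to-Ising substitution x_i = (1 + s_i)/2. The following sketch (our own illustration, not the authors' code) shows how the external fields arise as scaled row sums of the QUBO matrix, which is why they can dwarf the individual couplings.

    ```python
    import numpy as np

    def qubo_to_ising(Q):
        """Map E(x) = x^T Q x, x in {0,1}^n, to an Ising energy
        s^T J s + h^T s + c, s in {-1,+1}^n, via x = (1 + s)/2."""
        Q = np.asarray(Q, dtype=float)
        Q = (Q + Q.T) / 2.0                  # symmetrize; E(x) is unchanged
        J = Q / 4.0
        np.fill_diagonal(J, 0.0)             # s_i^2 = 1: diagonal folds into h, c
        h = Q.sum(axis=1) / 2.0              # external fields = scaled row sums,
                                             # often large versus single couplings
        c = (np.sum(Q) + np.trace(Q)) / 4.0  # constant energy offset
        return J, h, c
    ```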

  14. The Investigation of Social Problem Solving Abilities of University Students in Terms of Perceived Social Support

    ERIC Educational Resources Information Center

    Tras, Zeliha

    2013-01-01

The purpose of this study is to analyze university students' perceived social support and social problem solving. The participants were 827 (474 female and 353 male) university students. Data were collected using the Perceived Social Support Scale-Revised (Yildirim, 2004) and the Social Problem Solving inventory (Maydeu-Olivares and D'Zurilla, 1996), translated and…

  15. Learning Analysis of K-12 Students' Online Problem Solving: A Three-Stage Assessment Approach

    ERIC Educational Resources Information Center

    Hu, Yiling; Wu, Bian; Gu, Xiaoqing

    2017-01-01

    Problem solving is considered a fundamental human skill. However, large-scale assessment of problem solving in K-12 education remains a challenging task. Researchers have argued for the development of an enhanced assessment approach through joint effort from multiple disciplines. In this study, a three-stage approach based on an evidence-centered…

  16. Tribal Colleges: The Original Extreme Makeover Experts

    ERIC Educational Resources Information Center

    Powless, Donna

    2015-01-01

    In this article, the author states "our experience with education is a prime example in proving we are experts at problem-solving and are the originators of the extreme makeover." Educational institutions were introduced to the Native people in an outrageous manner--often as a mask for assimilating American Indians, routinely resulting…

  17. The relationship between mathematical problem-solving skills and self-regulated learning through homework behaviours, motivation, and metacognition

    NASA Astrophysics Data System (ADS)

    Çiğdem Özcan, Zeynep

    2016-04-01

    Studies highlight that using appropriate strategies during problem solving is important to improve problem-solving skills and draw attention to the fact that using these skills is an important part of students' self-regulated learning ability. Studies on this matter view the self-regulated learning ability as key to improving problem-solving skills. The aim of this study is to investigate the relationship between mathematical problem-solving skills and the three dimensions of self-regulated learning (motivation, metacognition, and behaviour), and whether this relationship is of a predictive nature. The sample of this study consists of 323 students from two public secondary schools in Istanbul. In this study, the mathematics homework behaviour scale was administered to measure students' homework behaviours. For metacognition measurements, the mathematics metacognition skills test for students was administered to measure offline mathematical metacognitive skills, and the metacognitive experience scale was used to measure the online mathematical metacognitive experience. The internal and external motivational scales used in the Programme for International Student Assessment (PISA) test were administered to measure motivation. A hierarchic regression analysis was conducted to determine the relationship between the dependent and independent variables in the study. Based on the findings, a model was formed in which 24% of the total variance in students' mathematical problem-solving skills is explained by the three sub-dimensions of the self-regulated learning model: internal motivation (13%), willingness to do homework (7%), and post-problem retrospective metacognitive experience (4%).

  18. Associations of Patient Health-Related Problem Solving with Disease Control, Emergency Department Visits, and Hospitalizations in HIV and Diabetes Clinic Samples

    PubMed Central

    Gemmell, Leigh; Kulkarni, Babul; Klick, Brendan; Brancati, Frederick L.

    2007-01-01

    Background Patient problem solving and decision making are recognized as essential to effective self-management across multiple chronic diseases. However, a health-related problem-solving instrument that demonstrates sensitivity to disease control parameters in multiple diseases has not been established. Objectives To determine, in two disease samples, internal consistency and associations with disease control of the Health Problem-Solving Scale (HPSS), a 50-item measure with 7 subscales assessing effective and ineffective problem-solving approaches, learning from past experiences, and motivation/orientation. Design Cross-sectional study. Participants Outpatients from university-affiliated medical center HIV (N = 111) and diabetes mellitus (DM, N = 78) clinics. Measurements HPSS, CD4, hemoglobin A1c (HbA1c), and number of hospitalizations in the previous year and Emergency Department (ED) visits in the previous 6 months. Results Administration time for the HPSS ranged from 5 to 10 minutes. Cronbach’s alpha for the total HPSS was 0.86 and 0.89 for HIV and DM, respectively. Higher total scores (better problem solving) were associated with higher CD4 and fewer hospitalizations in HIV and lower HbA1c and fewer ED visits in DM. Health Problem-Solving Scale subscales representing negative problem-solving approaches were consistently associated with more hospitalizations (HIV, DM) and ED visits (DM). Conclusions The HPSS may identify problem-solving difficulties with disease self-management and assess effectiveness of interventions targeting patient decision making in self-care. PMID:17443373

  19. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems.

    PubMed

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve these problems. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way.

  20. Piracy and Armed Robbery in the Malacca Strait: A Problem Solved

    DTIC Science & Technology

    2009-01-01

PIRACY AND ARMED ROBBERY IN THE MALACCA STRAIT: A Problem Solved? Catherine Zara Raymond. The Malacca Strait is a narrow waterway that extends nearly...waterway is extremely small. With statistics such as these, one might wonder why we are still seeing the publication of articles such... Catherine Zara Raymond...Shrivenham, United Kingdom. She is also a PhD student at King's College London. Previously, Zara worked as an analyst for the security consultancy...

  1. Two-dimensional radiative transfer. I - Planar geometry. [in stellar atmospheres

    NASA Technical Reports Server (NTRS)

    Mihalas, D.; Auer, L. H.; Mihalas, B. R.

    1978-01-01

    Differential-equation methods for solving the transfer equation in two-dimensional planar geometries are developed. One method, which uses a Hermitian integration formula on ray segments through grid points, proves to be extremely well suited to velocity-dependent problems. An efficient elimination scheme is developed for which the computing time scales linearly with the number of angles and frequencies; problems with large velocity amplitudes can thus be treated accurately. A very accurate and efficient method for performing a formal solution is also presented. A discussion is given of several examples of periodic media and free-standing slabs, both in static cases and with velocity fields. For the free-standing slabs, two-dimensional transport effects are significant near boundaries, but no important effects were found in any of the periodic cases studied.

  2. How do Rumination and Social Problem Solving Intensify Depression? A Longitudinal Study.

    PubMed

    Hasegawa, Akira; Kunisato, Yoshihiko; Morimoto, Hiroshi; Nishimura, Haruki; Matsuda, Yuko

    2018-01-01

In order to examine how rumination and social problem solving intensify depression, the present study investigated longitudinal associations among each dimension of rumination and social problem solving and evaluated aspects of these constructs that predicted subsequent depression. A three-wave longitudinal study, with an interval of 4 weeks between waves, was conducted. Japanese university students completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version, and Interpersonal Stress Event Scale on three occasions 4 weeks apart (n = 284 at Time 1, 198 at Time 2, 165 at Time 3). Linear mixed models were analyzed to test whether each variable predicted subsequent depression, rumination, and each dimension of social problem solving. Rumination and negative problem orientation demonstrated a mutually enhancing relationship. Because these two variables were not associated with interpersonal conflict during the subsequent 4 weeks, rumination and negative problem orientation appear to strengthen each other without environmental change. Rumination and impulsivity/carelessness style were associated with subsequent depressive symptoms, after controlling for the effect of initial depression. Because rumination and impulsivity/carelessness style were not concurrently and longitudinally associated with each other, rumination and impulsive/careless problem solving style appear to be independent processes that serve to intensify depression.

  3. Some Cognitive Characteristics of Night-Sky Watchers: Correlations between Social Problem-Solving, Need for Cognition, and Noctcaelador

    ERIC Educational Resources Information Center

    Kelly, William E.

    2005-01-01

    This study explored the relationship between night-sky watching and self-reported cognitive variables: need for cognition and social problem-solving. University students (N = 140) completed the Noctcaelador Inventory, the Need for Cognition Scale, and the Social Problem Solving Inventory. The results indicated that an interest in the night-sky was…

  4. Investigating Prospective Teachers' Perceived Problem-Solving Abilities in Relation to Gender, Major, Place Lived, and Locus of Control

    ERIC Educational Resources Information Center

    Çakir, Mustafa

    2017-01-01

    The purpose of this study is to investigate prospective teachers' perceived personal problem-solving competencies in relation to gender, major, place lived, and internal-external locus of control. The Personal Problem-Solving Inventory and Rotter's Internal-External Locus of Control Scale were used to collect data from freshman teacher candidates…

  5. An Academic Survey Concerning High School and University Students' Attitudes and Approaches to Problem Solving in Chemistry

    ERIC Educational Resources Information Center

    Duran, Muharrem

    2016-01-01

The aim of this study is to reveal differences between the attitudes and approaches towards problem solving in chemistry of students from different types of high schools and of first-year university students. For this purpose, the scale originally developed by Mason and Singh (2010) to measure students' attitudes and approaches towards problem solving in…

  6. Elementary School Students Perception Levels of Problem Solving Skills

    ERIC Educational Resources Information Center

    Yavuz, Günes; Yasemin, Deringöl; Arslan, Çigdem

    2017-01-01

The purpose of this study is to reveal the perception levels of problem-solving skills of elementary school students. The sample consists of 264 elementary students attending 5th, 6th, 7th, and 8th grades in a large city in Turkey. Data were collected by means of the "Perception Scale for Problem Solving Skills", which…

  7. Extreme value problems without calculus: a good link with geometry and elementary maths

    NASA Astrophysics Data System (ADS)

    Ganci, Salvatore

    2016-11-01

Some classical examples of problem solving, where an extreme value condition is required, are considered and/or revisited here. The search for non-calculus solutions appears pedagogically useful and intriguing, as a rich literature shows. A teacher who teaches both maths and physics (as happens in Italian high schools) can find in these kinds of problems a mind-stimulating exercise to set against the standard solutions obtained by differential calculus. A good link between the geometric and analytical explanations is thus established.
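
    A classic specimen of this genre (chosen here purely for illustration; the paper's own examples may differ) minimizes x + a/x for x > 0 using only the arithmetic-geometric mean inequality:

    ```latex
    \[
      x + \frac{a}{x} \;\ge\; 2\sqrt{x \cdot \frac{a}{x}} \;=\; 2\sqrt{a},
      \qquad x > 0,\; a > 0,
    \]
    ```

    with equality exactly when x = a/x, that is at x = \sqrt{a}; the minimum value 2\sqrt{a} is obtained with no derivative in sight.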

  8. Numerical Simulation of Two Phase Flows

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    2001-01-01

Two phase flows can be found in broad situations in nature, biology, and industrial devices and can involve diverse and complex mechanisms. While the physical models may be specific to certain situations, the mathematical formulation and numerical treatment for solving the governing equations can be general. Hence, we require not only information concerning each individual phase, as needed in a single phase, but also the interactions between them. These interaction terms, however, pose additional numerical challenges because they are beyond the basis that we use to construct modern numerical schemes, namely the hyperbolicity of the equations. Moreover, due to disparate differences in time scales, fluid compressibility and nonlinearity become acute, further complicating the numerical procedures. In this paper, we show the ideas and the procedure by which the AUSM-family schemes are extended for solving two phase flow problems. Specifically, both phases are assumed to be in thermodynamic equilibrium, namely, the time scales involved in phase interactions are extremely short in comparison with those of fluid speeds and pressure fluctuations. Details of the numerical formulation and the issues involved are discussed, and the effectiveness of the method is demonstrated for several industrial examples.

  9. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, including real-life problems where conventional techniques cannot be applied. Grey Wolf Optimizer is one such technique, which has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley, and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except Rosenbrock, which is a unimodal function.
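
    The benchmarks named in this abstract have standard scalable forms in the optimization literature (the definitions below are those standard forms, not reproduced from the paper); for instance, the Sphere and Rastrigin functions are defined for any dimension and share a global minimum of 0 at the origin.

    ```python
    import numpy as np

    def sphere(x):
        """Unimodal bowl: sum of squares."""
        return float(np.sum(x ** 2))

    def rastrigin(x):
        """Highly multimodal: cosine ripples on a quadratic bowl."""
        n = x.size
        return float(10.0 * n + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

    x = np.zeros(1000)              # the study varies dimension from 50 to 1000
    print(sphere(x), rastrigin(x))  # both global minima are 0 at the origin
    ```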

  10. An exploratory study of the relationship between changes in emotion and cognitive processes and treatment outcome in borderline personality disorder.

    PubMed

    McMain, Shelley; Links, Paul S; Guimond, Tim; Wnuk, Susan; Eynan, Rahel; Bergmans, Yvonne; Warwar, Serine

    2013-01-01

This exploratory study examined changes in specific emotion processes and cognitive problem-solving processes in individuals with borderline personality disorder (BPD), and assessed the relationship of these changes to treatment outcome. Emotion and cognitive problem-solving processes were assessed using the Toronto Alexithymia Scale, the Linguistic Inquiry Word Count, the Derogatis Affect Balance Scale, and the Problem Solving Inventory. Participants who showed greater improvements in affect balance, problem solving, and the ability to identify and describe emotions showed greater improvements on treatment outcome, with affect balance remaining statistically significant under the most conservative conditions. The results provide preliminary evidence to support the theory that specific improvements in emotion and cognitive processes are associated with positive treatment outcomes (symptom distress, interpersonal functioning) in BPD. The implications for treatment are discussed.

  11. Autobiographical memory, interpersonal problem solving, and suicidal behavior in adolescent inpatients.

    PubMed

Arie, Miri; Apter, Alan; Orbach, Israel; Yefet, Yael; Zalsman, Gil

    2008-01-01

    The aim of the study was to test Williams' (Williams JMG. Depression and the specificity of autobiographical memory. In: Rubin D, ed. Remembering Our Past: Studies in Autobiographical Memory. London: Cambridge University Press; 1996:244-267.) theory of suicidal behavior in adolescents and young adults by examining the relationship among suicidal behaviors, defective ability to retrieve specific autobiographical memories, impaired interpersonal problem solving, negative life events, repression, and hopelessness. Twenty-five suicidal adolescent and young adult inpatients (16.5 y +/- 2.5) were compared with 25 nonsuicidal adolescent and young adult inpatients (16.5 y +/- 2.5) and 25 healthy controls. Autobiographical memory was tested by a word association test; problem solving by the means-ends problem solving technique; negative life events by the Coddington scale; repression by the Life Style Index; hopelessness by the Beck scale; suicidal risk by the Plutchik scale, and suicide attempt by clinical history. Impairment in the ability to produce specific autobiographical memories, difficulties with interpersonal problem solving, negative life events, and repression were all associated with hopelessness and suicidal behavior. There were significant correlations among all the variables except for repression and negative life events. These findings support Williams' notion that generalized autobiographical memory is associated with deficits in interpersonal problem solving, negative life events, hopelessness, and suicidal behavior. The finding that defects in autobiographical memory are associated with suicidal behavior in adolescents and young adults may lead to improvements in the techniques of cognitive behavioral therapy in this age group.

  12. Coping, problem solving, depression, and health-related quality of life in patients receiving outpatient stroke rehabilitation.

    PubMed

    Visser, Marieke M; Heijenbrok-Kal, Majanka H; Spijker, Adriaan Van't; Oostra, Kristine M; Busschbach, Jan J; Ribbers, Gerard M

    2015-08-01

    To investigate whether patients with high and low depression scores after stroke use different coping strategies and problem-solving skills and whether these variables are related to psychosocial health-related quality of life (HRQOL) independent of depression. Cross-sectional study. Two rehabilitation centers. Patients participating in outpatient stroke rehabilitation (N=166; mean age, 53.06±10.19y; 53% men; median time poststroke, 7.29mo). Not applicable. Coping strategy was measured using the Coping Inventory for Stressful Situations; problem-solving skills were measured using the Social Problem Solving Inventory-Revised: Short Form; depression was assessed using the Center for Epidemiologic Studies Depression Scale; and HRQOL was measured using the five-level EuroQol five-dimensional questionnaire and the Stroke-Specific Quality of Life Scale. Independent samples t tests and multivariable regression analyses, adjusted for patient characteristics, were performed. Compared with patients with low depression scores, patients with high depression scores used less positive problem orientation (P=.002) and emotion-oriented coping (P<.001) and more negative problem orientation (P<.001) and avoidance style (P<.001). Depression score was related to all domains of both general HRQOL (visual analog scale: β=-.679; P<.001; utility: β=-.009; P<.001) and stroke-specific HRQOL (physical HRQOL: β=-.020; P=.001; psychosocial HRQOL: β=-.054, P<.001; total HRQOL: β=-.037; P<.001). Positive problem orientation was independently related to psychosocial HRQOL (β=.086; P=.018) and total HRQOL (β=.058; P=.031). Patients with high depression scores use different coping strategies and problem-solving skills than do patients with low depression scores. Independent of depression, positive problem-solving skills appear to be most significantly related to better HRQOL. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  13. Association Between Anticipatory Grief and Problem Solving Among Family Caregivers of Persons with Cognitive Impairment

    PubMed Central

    Fowler, Nicole R.; Hansen, Alexandra S.; Barnato, Amber E.; Garand, Linda

    2013-01-01

Objective Measure perceived involvement in medical decision making and determine if anticipatory grief is associated with problem solving among family caregivers of older adults with cognitive impairment. Method Retrospective analysis of baseline data from a caregiver intervention (n=73). Multivariable regression models tested the association between caregivers’ anticipatory grief, measured by the Anticipatory Grief Scale (AGS), and problem-solving abilities, measured by the Social Problem Solving Inventory – Revised: Short Form (SPSI-R: S). Results 47/73 (64%) of caregivers reported involvement in medical decision making. Mean AGS was 70.1 (± 14.8) and mean SPSI-R:S was 107.2 (± 11.6). Higher AGS scores were associated with lower positive problem orientation (P=0.041) and higher negative problem orientation scores (P=0.001), but not with other components of problem solving (rational problem solving, avoidance style, and impulsivity/carelessness style). Discussion Higher anticipatory grief among family caregivers impaired problem solving, which could have negative consequences for their medical decision-making responsibilities. PMID:23428394

  14. The Investigation of Problem Solving Skill of the Mountaineers in Terms of Demographic Variables

    ERIC Educational Resources Information Center

    Gürer, Burak

    2015-01-01

The aim of this research is to investigate the problem-solving skills of individuals involved in mountaineering. 315 volunteers participated in the study. The research data were collected using the problem-solving scale developed by Heppner and Peterson, the Turkish adaptation of which was carried out by Sahin et al. There are 35 items in total, and only 3…

  15. Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

Fuzzy optimization has been one of the most prominent topics in the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper, a large-scale non-linear fuzzy programming problem is solved by the hybrid optimization techniques of Line Search (LS), Simulated Annealing (SA), and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables, and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and the level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is very efficient and productive in solving large-scale non-linear fuzzy programming problems.

  16. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems

    PubMed Central

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve these problems. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way. PMID:28848467

  17. An Investigation of Taiwanese Early Adolescents' Self-Evaluations Concerning the Big 6 Information Problem-Solving Approach

    ERIC Educational Resources Information Center

    Chang, Chiung-Sui

    2007-01-01

    The study developed a Big 6 Information Problem-Solving Scale (B61PS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the…

  18. Integrated case management for work-related upper-extremity disorders: impact of patient satisfaction on health and work status.

    PubMed

    Feuerstein, Michael; Huang, Grant D; Ortiz, Jose M; Shaw, William S; Miller, Virginia I; Wood, Patricia M

    2003-08-01

    An integrated case management (ICM) approach (ergonomic and problem-solving intervention) to work-related upper-extremity disorders was examined in relation to patient satisfaction, future symptom severity, function, and return to work (RTW). Federal workers with work-related upper-extremity disorder workers' compensation claims (n = 205) were randomly assigned to usual care or ICM intervention. Patient satisfaction was assessed after the 4-month intervention period. Questionnaires on clinical outcomes and ergonomic exposure were administered at baseline and at 6- and 12-months postintervention. Time from intervention to RTW was obtained from an administrative database. ICM group assignment was significantly associated with greater patient satisfaction. Regression analyses found higher patient satisfaction levels predicted decreased symptom severity and functional limitations at 6 months and a shorter RTW. At 12 months, predictors of positive outcomes included male gender, lower distress, lower levels of reported ergonomic exposure, and receipt of ICM. Findings highlight the utility of targeting workplace ergonomic and problem solving skills.

  19. Cognitive Profiles of Mathematical Problem Solving Learning Disability for Different Definitions of Disability

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Fletcher, Jack M.; Fuchs, Douglas; Hamlett, Carol L.

    2014-01-01

    Three cohorts of third-grade students (N = 813) were evaluated on achievement, cognitive abilities, and behavioral attention according to contrasting research traditions in defining math learning disability (LD) status: low achievement versus extremely low achievement and IQ-achievement discrepant versus strictly low-achieving LD. We use methods from these two traditions to form math problem solving LD groups. To evaluate group differences, we used MANOVA-based profile and canonical analyses to control for relations among the outcomes and regression to control for group definition variables. Results suggest that basic arithmetic is the key distinguishing characteristic that separates low-achieving problem solvers (including LD, regardless of definition) from typically achieving students. Word problem solving is the key distinguishing characteristic that separates IQ-achievement-discrepant from strictly low-achieving LD students, favoring the IQ-achievement-discrepant students. PMID:24939971

  20. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with systems theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  1. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques the resulting algorithm is well suited for large-scale problems. Furthermore the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851

  2. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  3. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

The vibration of a storey building can be modelled as a system of second-order ordinary differential equations. If the number of floors of a building is large, then the result is a large-scale system of second-order ordinary differential equations. The large-scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large-scale systems of second-order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
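
    As a sketch of the comparison, the two Runge-Kutta-type methods named in the abstract can be applied to a single-degree-of-freedom vibration equation u'' = -(k/m)u rewritten as a first-order system; the test problem and parameters here are our own illustration, not the paper's storey-building model.

    ```python
    import numpy as np

    def f(t, y, k=1.0, m=1.0):
        """Right-hand side of u'' = -(k/m) u as a first-order system y = (u, u')."""
        u, v = y
        return np.array([v, -(k / m) * u])

    def euler_step(f, t, y, h):
        return y + h * f(t, y)

    def heun_step(f, t, y, h):
        y_pred = y + h * f(t, y)                           # Euler predictor
        return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # trapezoidal corrector

    h, steps = 0.01, 1000
    y_e = y_h = np.array([1.0, 0.0])                       # u(0) = 1, u'(0) = 0
    for n in range(steps):
        y_e = euler_step(f, n * h, y_e, h)
        y_h = heun_step(f, n * h, y_h, h)
    print("Euler:", y_e[0], "Heun:", y_h[0], "exact:", np.cos(10.0))
    ```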

  4. Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra

    NASA Astrophysics Data System (ADS)

    Wang, Hefeng; Fan, Heng; Li, Fuli

    2014-01-01

    When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.

  5. Relations of social problem solving with interpersonal competence in Japanese students.

    PubMed

    Sumi, Katsunori

    2011-12-01

    To clarify the relations of the dimensions of social problem solving with those of interpersonal competence in a sample of 234 Japanese college students, Japanese versions of the Social Problem-solving Inventory-Revised and the Social Skill Scale were administered. Pearson correlations between the two sets of variables were low, but higher within each set of subscales. Cronbach's alpha was low for four subscales assessing interpersonal competence.

  6. Social problem-solving among adolescents treated for depression.

    PubMed

    Becker-Weidman, Emily G; Jacobs, Rachel H; Reinecke, Mark A; Silva, Susan G; March, John S

    2010-01-01

Studies suggest that deficits in social problem-solving may be associated with increased risk of depression and suicidality in children and adolescents. It is unclear, however, which specific dimensions of social problem-solving are related to depression and suicidality among youth. Moreover, rational problem-solving strategies and problem-solving motivation may moderate or predict change in depression and suicidality among children and adolescents receiving treatment. The effects of social problem-solving on acute treatment outcomes were explored in a randomized controlled trial of 439 clinically depressed adolescents enrolled in the Treatment for Adolescents with Depression Study (TADS). Measures included the Children's Depression Rating Scale-Revised (CDRS-R), the Suicidal Ideation Questionnaire--Grades 7-9 (SIQ-Jr), and the Social Problem-Solving Inventory-Revised (SPSI-R). A random coefficients regression model was conducted to examine main and interaction effects of treatment and SPSI-R subscale scores on outcomes during the 12-week acute treatment stage. Negative problem orientation, positive problem orientation, and avoidant problem-solving style were non-specific predictors of depression severity. In terms of suicidality, avoidant problem-solving style and impulsiveness/carelessness style were predictors, whereas negative problem orientation and positive problem orientation were moderators of treatment outcome. Implications of these findings, limitations, and directions for future research are discussed. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
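
    For concreteness, here is a minimal Dijkstra implementation (our own sketch of the greedy minimal-cost-path algorithm discussed in the abstract). The extremal-dynamics picture is visible in the loop: at every step the frontier site with the globally minimal path energy is finalized, much as the end-point of a directed polymer advances through a random medium.

    ```python
    import heapq

    def dijkstra(adj, src):
        """Minimal path costs from src; adj[u] is a list of (v, cost) pairs
        with non-negative costs (e.g., random bond energies)."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)   # greedy: globally minimal frontier site
            if d > dist.get(u, float("inf")):
                continue                 # stale heap entry; already finalized
            for v, c in adj[u]:
                nd = d + c
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist
    ```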

  8. Problem-Solving: Scaling the "Brick Wall"

    ERIC Educational Resources Information Center

    Benson, Dave

    2011-01-01

    Across the primary and secondary phases, pupils are encouraged to use and apply their knowledge, skills, and understanding of mathematics to solve problems in a variety of forms, ranging from single-stage word problems to the challenge of extended rich tasks. Amongst many others, Cockcroft (1982) emphasised the importance and relevance of…

  9. Pre-service mathematics teachers’ ability in solving well-structured problem

    NASA Astrophysics Data System (ADS)

    Paradesa, R.

    2018-01-01

This study aimed to describe the mathematical problem-solving ability of undergraduate students of mathematics education in solving well-structured problems. The study was qualitative and descriptive. The subjects were 100 undergraduate students of Mathematics Education at one of the private universities in Palembang city. The data were collected through two essay-form test items. The results showed that only 8% of students could solve the first problem, and even they did not check back to validate the process; based on a scoring rubric that follows Polya's strategy, their answers satisfied a 2-4-2-0 pattern. For the second problem, however, 45% of students succeeded, because the second problem imitated an example given during the learning process. The average score of the undergraduate students' mathematical problem-solving ability in solving well-structured problems was 56.00, with a standard deviation of 13.22. On a 0-100 scale, this ability can be categorized as low. From this result, the conclusion was that undergraduate students of mathematics education in Palembang still have difficulty solving well-structured mathematics problems.

  10. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  11. A problem-solving education intervention in caregivers and patients during allogeneic hematopoietic stem cell transplantation.

    PubMed

    Bevans, Margaret; Wehrlen, Leslie; Castro, Kathleen; Prince, Patricia; Shelburne, Nonniekaye; Soeken, Karen; Zabora, James; Wallen, Gwenyth R

    2014-05-01

    The aim of this study was to determine the effect of problem-solving education on self-efficacy and distress in informal caregivers of allogeneic hematopoietic stem cell transplantation patients. Patient/caregiver teams attended three 1-hour problem-solving education sessions to help cope with problems during hematopoietic stem cell transplantation. Primary measures included the Cancer Self-Efficacy Scale-transplant and Brief Symptom Inventory-18. Active caregivers reported improvements in self-efficacy (p < 0.05) and distress (p < 0.01) post-problem-solving education; caregiver responders also reported better health outcomes such as fatigue. The effect of problem-solving education on self-efficacy and distress in hematopoietic stem cell transplantation caregivers supports its inclusion in future interventions to meet the multifaceted needs of this population.

  12. A novel heuristic algorithm for capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

The vehicle routing problem with capacity constraints is considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity for large-scale problems. Consequently, new heuristic and metaheuristic approaches have been developed to solve this problem. In this paper, we construct a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS) with several specifically designed operators and features to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm is illustrated on benchmark problems. The algorithm provides better performance on large-scale instances and gains an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and found encouraging results in comparison with the company's current situation.
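
    For illustration, a single destroy-repair step in the spirit of ALNS might look like the sketch below (random-removal destroy plus cheapest-feasible-insertion repair; these operator choices and the simplified capacity handling are our assumptions, not the specifically designed operators of the paper).

    ```python
    import random

    def route_cost(route, dist, depot=0):
        """Cost of serving `route` starting and ending at the depot."""
        tour = [depot] + route + [depot]
        return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

    def destroy_repair(routes, dist, demand, capacity, frac=0.2):
        # Destroy: remove a random fraction of the customers.
        customers = [c for r in routes for c in r]
        removed = set(random.sample(customers, max(1, int(frac * len(customers)))))
        partial = [[c for c in r if c not in removed] for r in routes]
        # Repair: cheapest capacity-feasible insertion, one customer at a time.
        for c in removed:
            best = None
            for ri, r in enumerate(partial):
                if sum(demand[x] for x in r) + demand[c] > capacity:
                    continue
                for pos in range(len(r) + 1):
                    delta = route_cost(r[:pos] + [c] + r[pos:], dist) - route_cost(r, dist)
                    if best is None or delta < best[0]:
                        best = (delta, ri, pos)
            if best is None:
                partial.append([c])      # no feasible slot: open a new route
            else:
                partial[best[1]].insert(best[2], c)
        return partial
    ```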

  13. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving

    PubMed Central

    Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.

    2016-01-01

    We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844

  14. An investigation of Taiwanese early adolescents' self-evaluations concerning the Big 6 information problem-solving approach.

    PubMed

    Chang, Chiung-Sui

    2007-01-01

    The study developed a Big 6 Information Problem-Solving Scale (B61PS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the adolescents' perceptions about the Big 6 information problem-solving approach. In addition, the adolescents had quite different responses toward different subscales of the approach. Moreover, females tended to have higher quality information-searching skills than their male counterparts. The adolescents of different grades also displayed varying views toward the approach. Other results are also provided.

  15. When self-reliance is not safe: associations between reduced help-seeking and subsequent mental health symptoms in suicidal adolescents.

    PubMed

    Labouliere, Christa D; Kleinman, Marjorie; Gould, Madelyn S

    2015-04-01

    The majority of suicidal adolescents have no contact with mental health services, and reduced help-seeking in this population further lessens the likelihood of accessing treatment. A commonly-reported reason for not seeking help is youths' perception that they should solve problems on their own. In this study, we explore associations between extreme self-reliance behavior (i.e., solving problems on your own all of the time), help-seeking behavior, and mental health symptoms in a community sample of adolescents. Approximately 2150 adolescents, across six schools, participated in a school-based suicide prevention screening program, and a subset of at-risk youth completed a follow-up interview two years later. Extreme self-reliance was associated with reduced help-seeking, clinically-significant depressive symptoms, and serious suicidal ideation at the baseline screening. Furthermore, in a subset of youth identified as at-risk at the baseline screening, extreme self-reliance predicted level of suicidal ideation and depressive symptoms two years later even after controlling for baseline symptoms. Given these findings, attitudes that reinforce extreme self-reliance behavior may be an important target for youth suicide prevention programs. Reducing extreme self-reliance in youth with suicidality may increase their likelihood of appropriate help-seeking and concomitant reductions in symptoms.

  16. When Self-Reliance Is Not Safe: Associations between Reduced Help-Seeking and Subsequent Mental Health Symptoms in Suicidal Adolescents

    PubMed Central

    Labouliere, Christa D.; Kleinman, Marjorie; Gould, Madelyn S.

    2015-01-01

    The majority of suicidal adolescents have no contact with mental health services, and reduced help-seeking in this population further lessens the likelihood of accessing treatment. A commonly-reported reason for not seeking help is youths’ perception that they should solve problems on their own. In this study, we explore associations between extreme self-reliance behavior (i.e., solving problems on your own all of the time), help-seeking behavior, and mental health symptoms in a community sample of adolescents. Approximately 2150 adolescents, across six schools, participated in a school-based suicide prevention screening program, and a subset of at-risk youth completed a follow-up interview two years later. Extreme self-reliance was associated with reduced help-seeking, clinically-significant depressive symptoms, and serious suicidal ideation at the baseline screening. Furthermore, in a subset of youth identified as at-risk at the baseline screening, extreme self-reliance predicted level of suicidal ideation and depressive symptoms two years later even after controlling for baseline symptoms. Given these findings, attitudes that reinforce extreme self-reliance behavior may be an important target for youth suicide prevention programs. Reducing extreme self-reliance in youth with suicidality may increase their likelihood of appropriate help-seeking and concomitant reductions in symptoms. PMID:25837350

  17. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable, since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer showed substantially improved scaling for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
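
    The variable-fixing step can be sketched in a few lines (an illustrative reconstruction from the abstract, not the authors' code): spins whose value agrees across a large fraction of the low-energy samples are fixed, and the smaller, less connected residual problem is handed back to the sampler.

    ```python
    import numpy as np

    def fix_persistent(samples, threshold=0.9):
        """Given low-energy spin samples (rows of +/-1 values), return the
        indices and values of variables that take the same value in at least
        `threshold` of the samples; these are fixed before re-solving."""
        samples = np.asarray(samples)
        mean = samples.mean(axis=0)                    # lies in [-1, 1]
        persistent = np.abs(mean) >= 2.0 * threshold - 1.0
        return np.flatnonzero(persistent), np.sign(mean[persistent])
    ```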

  18. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a significant challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually impractical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
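
    A minimal sketch of a GIST-style iteration for the capped-L1 penalty, one of the non-convex penalties with a closed-form proximal operator, using a BB-initialized monotone line search; the least-squares loss and all parameter values are illustrative assumptions, not the paper's implementation:

        import numpy as np

        def capped_l1_prox(u, lam_over_t, theta):
            """Closed-form prox of r(x) = lam * sum(min(|x_i|, theta))."""
            au = np.abs(u)
            z1 = np.maximum(au, theta)                                # |x| >= theta branch
            z2 = np.minimum(theta, np.maximum(au - lam_over_t, 0.0))  # |x| <= theta branch
            h = lambda z: 0.5 * (z - au) ** 2 + lam_over_t * np.minimum(z, theta)
            return np.sign(u) * np.where(h(z1) <= h(z2), z1, z2)      # cheaper candidate wins

        def gist(A, b, lam=0.1, theta=0.5, max_iter=200, sigma=1e-4):
            f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
            r = lambda x: lam * np.sum(np.minimum(np.abs(x), theta))
            grad = lambda x: A.T @ (A @ x - b)
            x = np.zeros(A.shape[1]); g = grad(x); t = 1.0
            for _ in range(max_iter):
                while True:                       # monotone line search on t
                    x_new = capped_l1_prox(x - g / t, lam / t, theta)
                    decrease = sigma * t / 2 * np.sum((x_new - x) ** 2)
                    if f(x_new) + r(x_new) <= f(x) + r(x) - decrease:
                        break
                    t *= 2.0
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                t = np.clip((s @ y) / max(s @ s, 1e-12), 1e-8, 1e8)  # BB initialization
                x, g = x_new, g_new
            return x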

  19. Fast Eigensolver for Computing 3D Earth's Normal Modes

    NASA Astrophysics Data System (ADS)

    Shi, J.; De Hoop, M. V.; Li, R.; Xi, Y.; Saad, Y.

    2017-12-01

    We present a novel parallel computational approach to compute Earth's normal modes. We discretize Earth via an unstructured tetrahedral mesh and apply the continuous Galerkin finite element method to the elasto-gravitational system. To resolve the eigenvalue pollution issue, following the analysis separating the seismic point spectrum, we explicitly utilize a representation of the displacement to describe the oscillations of the non-seismic modes in the fluid outer core. Effectively, we separate out the essential spectrum, which is naturally related to the Brunt-Väisälä frequency. We introduce two Lanczos approaches with polynomial and rational filtering for solving this generalized eigenvalue problem in prescribed intervals. The polynomial filtering technique only accesses the matrix pair through matrix-vector products and is an ideal candidate for solving three-dimensional large-scale eigenvalue problems. The matrix-free scheme allows us to deal with fluid separation and self-gravitation in an efficient way, while the standard shift-and-invert method typically needs an explicit shifted matrix and its factorization. The rational filtering method converges much faster than the standard shift-and-invert procedure when computing all the eigenvalues inside an interval. Both Lanczos approaches solve for the interior eigenvalues extremely accurately compared with a standard eigensolver. In our computational experiments, we compare our results with the radial earth model benchmark, and visualize the normal modes using vector plots to illustrate the properties of the displacements in different modes.
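
    The polynomial-filtering idea can be illustrated with the simplest possible filter: the degree-2 polynomial -(A - cI)^2 maps the eigenvalues of A nearest a target c to the largest eigenvalues of the filtered operator, which plain Lanczos then finds using matrix-vector products only. The sketch below is a toy standard (not generalized) problem and assumes SciPy; it is an illustration of the principle, not the authors' solver:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, eigsh

        n = 2000
        A = diags(np.linspace(0.0, 10.0, n))   # stand-in operator with known spectrum
        c = 4.0                                # centre of the interval of interest

        def filtered_mv(x):
            # p(A) = -(A - c I)^2: eigenvalues of A closest to c become the
            # *largest* eigenvalues of p(A), reachable by plain Lanczos
            y = A @ x - c * x
            return -(A @ y - c * y)

        B = LinearOperator((n, n), matvec=filtered_mv, dtype=np.float64)
        w, V = eigsh(B, k=6, which='LA')             # Lanczos on the filtered operator
        ritz = np.array([v @ (A @ v) for v in V.T])  # Rayleigh quotients recover lambda(A)
        print(np.sort(ritz))                         # ~ eigenvalues of A nearest c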

  20. A comparative study of the effects of problem-solving skills training and relaxation on the score of self-esteem in women with postpartum depression

    PubMed Central

    Nasiri, Saeideh; Kordi, Masoumeh; Gharavi, Morteza Modares

    2015-01-01

    Background: Self-esteem is a determining factor of mental health. Individuals with low self-esteem are prone to depression, and low self-esteem is one of the main symptoms of depression. The aim of this study is to compare the effects of problem-solving skills training and relaxation on self-esteem scores in women with postpartum depression. Materials and Methods: This clinical trial was performed on 80 women. Sampling was done in Mashhad health centers from December 2009 to June 2010. Women were randomly assigned to problem-solving skills (n = 26), relaxation (n = 26), and control (n = 28) groups. Interventions were implemented for 6 weeks, and the subjects completed the Eysenck self-esteem scale again 9 weeks after delivery. Data analysis was done with descriptive statistics, the Kruskal–Wallis test, and analysis of variance (ANOVA) using SPSS software. Results: After the intervention, the mean self-esteem score was 117.9 ± 9.7 in the problem-solving group, 117.0 ± 11.8 in the relaxation group, and 113.5 ± 10.4 in the control group; there were significant differences between the relaxation and problem-solving groups and between the intervention groups and the control group. Conclusions: According to the results, problem-solving skills training and relaxation can be used to prevent and recover from postpartum depression. PMID:25709699

  1. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. These two features, flexibility and size, pose new problems in control system design. Since large-scale structures cannot be tested in ground-based facilities, decisions on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at the problem that the modelling knowledge available prior to operation is inadequate to achieve peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed component environment?

  2. Solving the flatness problem with an anisotropic instanton in Hořava-Lifshitz gravity

    NASA Astrophysics Data System (ADS)

    Bramberger, Sebastian F.; Coates, Andrew; Magueijo, João; Mukohyama, Shinji; Namba, Ryo; Watanabe, Yota

    2018-02-01

    In Hořava-Lifshitz gravity a scaling isotropic in space but anisotropic in spacetime, often called "anisotropic scaling," with the dynamical critical exponent z =3 , lies at the base of its renormalizability. This scaling also leads to a novel mechanism of generating scale-invariant cosmological perturbations, solving the horizon problem without inflation. In this paper we propose a possible solution to the flatness problem, in which we assume that the initial condition of the Universe is set by a small instanton respecting the same scaling. We argue that the mechanism may be more general than the concrete model presented here. We rely simply on the deformed dispersion relations of the theory, and on equipartition of the various forms of energy at the starting point.

  3. Application of symbolic/numeric matrix solution techniques to the NASTRAN program

    NASA Technical Reports Server (NTRS)

    Buturla, E. M.; Burroughs, S. H.

    1977-01-01

    The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
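
    The payoff of separating analysis from numeric solution can be sketched with SciPy's SuperLU interface; SuperLU fuses the symbolic and numeric phases into one factorization object (UMFPACK-style interfaces expose them separately), but reusing the object across right-hand sides captures the same benefit. The system below is a stand-in, not NASTRAN code:

        import numpy as np
        from scipy.sparse import random as sprandom, eye
        from scipy.sparse.linalg import splu

        # Sparse, diagonally boosted system standing in for a finite element matrix.
        n = 500
        A = (sprandom(n, n, density=0.01, random_state=0) + 10 * eye(n)).tocsc()
        lu = splu(A)                 # analysis + factorization done once
        for k in range(5):           # e.g. load cases or nonlinear iterations
            b = np.random.default_rng(k).standard_normal(n)
            x = lu.solve(b)          # cheap repeated solves reuse the factorization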

  4. The perceived problem-solving ability of nurse managers.

    PubMed

    Terzioglu, Fusun

    2006-07-01

    The development of a problem-solving approach to nursing has been one of the more important changes in nursing during the last decade. Nurse managers need effective problem-solving and management skills to decrease the cost of health care and to increase the quality of care. This descriptive study was conducted to determine the perceived problem-solving ability of nurse managers. From a population of 87 nurse managers, 71 were selected using the stratified random sampling method; 62 nurse managers agreed to participate. Data were collected through a questionnaire including demographic information and a problem-solving inventory. The problem-solving inventory was developed by Heppner and Petersen in 1982, validity and reliability studies were done, and it was adapted to Turkish by Sahin et al. (1993). The data were analyzed with SPSS 10.0 software, using percentages, mean values, one-way ANOVA, and independent-samples t-tests. Most of the nurses had 11 or more years of working experience (71%) and worked as charge nurses in the clinics. It was determined that 69.4% of the nurse managers did not have any educational training in administration. The most frequently encountered problems concerned managerial issues (30.6%) and professional staff (25.8%). Nurse managers who had received education about management, followed scientific publications and meetings, and used management models perceived their problem-solving skills as more adequate than the others (P>0.05). In this study, it was determined that nurses do not perceive themselves as having problem-solving skills at the desired level. In this context, it is extremely important that this subject be given an important place in both nursing education curricula and continuing education programmes.

  5. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  6. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
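
    A toy serial sketch of the subdomain-local-solve idea behind such domain decomposition solvers: a block-Jacobi (non-overlapping Schwarz) preconditioner built from independent local factorizations, applied inside conjugate gradients. The 1D Poisson matrix and the subdomain count are illustrative assumptions; the papers' intrusive stochastic extension is not shown:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg, splu, LinearOperator

        # 1D Poisson stand-in for the deterministic building block of the solver.
        n = 400
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc()
        b = np.ones(n)

        # One local factorization per subdomain (the parallelizable local solves).
        blocks = np.array_split(np.arange(n), 8)
        local_lu = [splu(A[idx, :][:, idx].tocsc()) for idx in blocks]

        def apply_prec(r):
            z = np.zeros_like(r)
            for idx, lu in zip(blocks, local_lu):
                z[idx] = lu.solve(r[idx])   # independent subdomain solves
            return z

        M = LinearOperator((n, n), matvec=apply_prec, dtype=np.float64)
        x, info = cg(A, b, M=M)             # preconditioned global iteration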

  7. Development and validation of the hypoglycaemia problem-solving scale for people with diabetes mellitus

    PubMed Central

    Juang, Jyuhn-Huarng; Lin, Chia-Hung

    2016-01-01

    Objective To develop and psychometrically test a new instrument, the hypoglycaemia problem-solving scale (HPSS), which was designed to measure how well people with diabetes mellitus manage their hypoglycaemia-related problems. Methods A cross-sectional survey design approach was used to validate the performance assessment instrument. Patients who had a diagnosis of type 1 or type 2 diabetes mellitus for at least 1 year, who were being treated with insulin and who had experienced at least one hypoglycaemic episode within the previous 6 months were eligible for inclusion in the study. Results A total of 313 patients were included in the study. The initial draft of the HPSS included 28 items. After exploratory factor analysis, the 24-item HPSS consisted of seven factors: problem-solving perception, detection control, identifying problem attributes, setting problem-solving goals, seeking preventive strategies, evaluating strategies, and immediate management. The Cronbach’s α for the total HPSS was 0.83. Conclusions The HPSS was verified as being valid and reliable. Future studies should further test and improve the instrument to increase its effectiveness in helping people with diabetes manage their hypoglycaemia-related problems. PMID:27059292

  8. Development and validation of the hypoglycaemia problem-solving scale for people with diabetes mellitus.

    PubMed

    Wu, Fei-Ling; Juang, Jyuhn-Huarng; Lin, Chia-Hung

    2016-06-01

    To develop and psychometrically test a new instrument, the hypoglycaemia problem-solving scale (HPSS), which was designed to measure how well people with diabetes mellitus manage their hypoglycaemia-related problems. A cross-sectional survey design approach was used to validate the performance assessment instrument. Patients who had a diagnosis of type 1 or type 2 diabetes mellitus for at least 1 year, who were being treated with insulin and who had experienced at least one hypoglycaemic episode within the previous 6 months were eligible for inclusion in the study. A total of 313 patients were included in the study. The initial draft of the HPSS included 28 items. After exploratory factor analysis, the 24-item HPSS consisted of seven factors: problem-solving perception, detection control, identifying problem attributes, setting problem-solving goals, seeking preventive strategies, evaluating strategies, and immediate management. The Cronbach's α for the total HPSS was 0.83. The HPSS was verified as being valid and reliable. Future studies should further test and improve the instrument to increase its effectiveness in helping people with diabetes manage their hypoglycaemia-related problems.

  9. Temperament and problem solving in a population of adolescent guide dogs.

    PubMed

    Bray, Emily E; Sammel, Mary D; Seyfarth, Robert M; Serpell, James A; Cheney, Dorothy L

    2017-09-01

    It is often assumed that measures of temperament within individuals are more correlated to one another than to measures of problem solving. However, the exact relationship between temperament and problem-solving tasks remains unclear because large-scale studies have typically focused on each independently. To explore this relationship, we tested 119 prospective adolescent guide dogs on a battery of 11 temperament and problem-solving tasks. We then summarized the data using both confirmatory factor analysis and exploratory principal components analysis. Results of confirmatory analysis revealed that a priori separation of tests as measuring either temperament or problem solving led to weak results, poor model fit, some construct validity, and no predictive validity. In contrast, results of exploratory analysis were best summarized by principal components that mixed temperament and problem-solving traits. These components had both construct and predictive validity (i.e., association with success in the guide dog training program). We conclude that there is complex interplay between tasks of "temperament" and "problem solving" and that the study of both together will be more informative than approaches that consider either in isolation.

  10. Effect of Tutorial Giving on The Topic of Special Theory of Relativity in Modern Physics Course Towards Students’ Problem-Solving Ability

    NASA Astrophysics Data System (ADS)

    Hartatiek; Yudyanto; Haryoto, Dwi

    2017-05-01

    A Special Theory of Relativity handbook was developed to guide students' tutorial activity in the Modern Physics course. Tutorials were added to the lecture class to address students' low problem-solving ability, since class time during the course was too limited for problem-solving exercises. The explicit problem-solving based tutorial handbook emphasizes five problem-solving strategies: (1) focus on the problem, (2) picture the physical facts, (3) plan the solution, (4) solve the problem, and (5) check the result. This research and development (R&D) study consisted of three main steps: (1) preliminary study, (2) draft product development, and (3) product validation. The draft product was validated by experts, who rated the feasibility of the material and the predicted effect of the tutorials on questionnaires with a scale of 1 to 4. Students' problem-solving ability in the Special Theory of Relativity reached a very good qualification, implying that tutorials supported by the handbook increased students' problem-solving ability. The empirical test revealed that the developed handbook significantly improved students' concept mastery and problem-solving ability; both gains were in the middle category, at 0.31 and 0.41, respectively.

  11. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
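
    A minimal illustration of the Jacobian-free Newton-Krylov building block, using SciPy's newton_krylov (which approximates Jacobian-vector products by finite differences, so no Jacobian matrix is ever formed) for one implicit Euler step of a toy 1D reaction-diffusion problem; the grid, time step, and nonlinearity are illustrative assumptions, and the AMR and preconditioning machinery of the paper is not shown:

        import numpy as np
        from scipy.optimize import newton_krylov

        # Implicit Euler step for u_t = u_xx + u^2: each step solves the
        # nonlinear system G(u_new) = 0 with Jacobian-free Newton-Krylov.
        n, dx, dt = 100, 1.0 / 100, 1e-3
        u = np.sin(np.pi * dx * np.arange(n))

        def G(v, u_old=u):
            lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2  # periodic Laplacian
            return v - u_old - dt * (lap + v**2)

        u_new = newton_krylov(G, u, method='lgmres', f_tol=1e-10)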

  12. Fault-tolerance of a neural network solving the traveling salesman problem

    NASA Technical Reports Server (NTRS)

    Protzel, P.; Palumbo, D.; Arras, M.

    1989-01-01

    This study presents the results of a fault-injection experiment that simulates a neural network solving the Traveling Salesman Problem (TSP). The network is based on a modified version of Hopfield's and Tank's original method. We define a performance characteristic for the TSP that allows an overall assessment of the solution quality for different city distributions and problem sizes. Five different 10-, 20-, and 30-city cases are used for the injection of up to 13 simultaneous stuck-at-0 and stuck-at-1 faults. The results of more than 4000 simulation runs show the extreme fault tolerance of the network, especially with respect to stuck-at-0 faults. One possible explanation for this overall surprising result is the redundancy of the problem representation.

  13. Mexican Hat Wavelet Kernel ELM for Multiclass Classification.

    PubMed

    Wang, Jie; Song, Yi-Fan; Ma, Tian-Lei

    2017-01-01

    Kernel extreme learning machine (KELM) is a feedforward neural network widely used in classification problems. To some extent, it mitigates ELM's problems of invalid hidden nodes and large computational complexity. However, the traditional KELM classifier usually has low test accuracy on multiclass classification problems. To solve this problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier improves training accuracy and reduces training time on multiclass classification problems. Moreover, the validity of the Mexican Hat wavelet as a kernel function of ELM is rigorously proved. Experimental results on different data sets show that the performance of the proposed classifier is significantly superior to that of the compared classifiers.
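
    A minimal sketch of a kernel ELM classifier with a Mexican Hat wavelet kernel. The kernel form psi(r) = (1 - r^2) exp(-r^2/2) and the regularized linear solve follow the standard KELM recipe, but the class below is an illustrative reconstruction under those assumptions, not the paper's code:

        import numpy as np

        def mexican_hat_kernel(X, Y, a=1.0):
            # psi(r) = (1 - r^2) exp(-r^2 / 2), with r = ||x - y|| / a (assumed form)
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / a**2
            return (1.0 - d2) * np.exp(-d2 / 2.0)

        class WaveletKELM:
            """Minimal kernel extreme learning machine (one-vs-all targets)."""
            def __init__(self, C=100.0, a=1.0):
                self.C, self.a = C, a

            def fit(self, X, y):
                self.X = X
                T = np.eye(int(y.max()) + 1)[y]        # one-hot class targets
                K = mexican_hat_kernel(X, X, self.a)
                # regularized solve: (I/C + K) alpha = T
                self.alpha = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
                return self

            def predict(self, Xq):
                return (mexican_hat_kernel(Xq, self.X, self.a) @ self.alpha).argmax(1)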

  14. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes

    PubMed Central

    Wade, Shari L.; Cassedy, Amy E.; Fulks, Lauren E.; Taylor, H. Gerry; Stancin, Terry; Kirkwood, Michael W.; Yeates, Keith O.; Kurowski, Brad G.

    2017-01-01

    Objective To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Design Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Setting Four children’s hospitals and 1 general hospital, with level 1 trauma units. Participants Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Main Outcome Measures Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. Results The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23–.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Conclusions Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. PMID:28389109

  15. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes.

    PubMed

    Wade, Shari L; Cassedy, Amy E; Fulks, Lauren E; Taylor, H Gerry; Stancin, Terry; Kirkwood, Michael W; Yeates, Keith O; Kurowski, Brad G

    2017-08-01

    To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Four children's hospitals and 1 general hospital, with level 1 trauma units. Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23-.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes.

  16. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving.

    PubMed

    Semeniuk, Yulia Yuriyivna; Brown, Roger L; Riesch, Susan K

    2016-07-01

    We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem-Solving Theory. The Circumplex Model posits that balanced families, that is, those characterized by high cohesion, flexibility, and open communication, function best. Social Problem-Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large-magnitude group effects for selected scales for youth and dyads, suggesting a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned are addressed.

  17. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing it as the integration of a system of stiff differential equations, utilizing concepts from singular perturbation theory. This paper evaluates the robustness and reliability of such a singular-perturbation-based SUMT algorithm on two structural optimization problems of widely separated scales. It concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large-scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.

  18. Large-scale studies on the transferability of general problem-solving skills and the pedagogic potential of physics

    NASA Astrophysics Data System (ADS)

    Mashood, K. K.; Singh, Vijay A.

    2013-09-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in highly competitive problem-solving examinations was studied using a massive database. The sample sizes ranged from hundreds to a few hundred thousand. Encouraged by the presence of significant correlations, we interviewed 20 students to explore the pedagogic potential of physics in imparting transferable problem-solving skills. We report strategies and practices relevant to physics employed by these students which foster transfer.

  19. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.

  20. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and also to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
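
    A minimal sketch of a creeping random search with one common convergence-speeding modification (adaptive step size), applied to a toy two-parameter identification problem; all names and constants are illustrative assumptions, not the paper's algorithm:

        import numpy as np

        def creeping_random_search(f, x0, step=1.0, shrink=0.9, grow=1.2,
                                   iters=2000, rng=None):
            """Creep around the current best point with small random perturbations,
            adapting the step size on success/failure to speed convergence."""
            rng = rng or np.random.default_rng(0)
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(iters):
                cand = x + step * rng.standard_normal(x.shape)
                fc = f(cand)
                if fc < fx:
                    x, fx, step = cand, fc, step * grow  # success: keep creeping, widen
                else:
                    step *= shrink                       # failure: tighten the search
            return x, fx

        # Toy identification: recover (a, k) of the model a * exp(-k * t) from data.
        t = np.linspace(0.0, 1.0, 50)
        data = 2.0 * np.exp(-3.0 * t)
        sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)
        p_hat, err = creeping_random_search(sse, [1.0, 1.0])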

  1. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  2. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  3. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior defines two position-updating strategies, and selection and crossover operators define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, a basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.

  4. Some Problems of Industrial Scale-Up.

    ERIC Educational Resources Information Center

    Jackson, A. T.

    1985-01-01

    Scientific ideas of the biological laboratory are turned into economic realities in industry only after several problems are solved. Economics of scale, agitation, heat transfer, sterilization of medium and air, product recovery, waste disposal, and future developments are discussed using aerobic respiration as the example in the scale-up…

  5. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
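
    A sketch of the data-driven surrogate idea: a small neural network is fitted to map local permeability patches to the basis-function values that a local solve would produce, then used to predict basis functions for new realizations at lower cost. The data below are random stand-ins (in practice the targets would come from actual local solves), and the network shape is an illustrative assumption:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical setup: for each dual-grid cell, a flattened permeability
        # patch (input) and the basis-function values from a local solve (output).
        rng = np.random.default_rng(0)
        n_samples, patch, basis = 2000, 5 * 5, 9 * 9
        K_patches = rng.lognormal(size=(n_samples, patch))   # permeability patches
        Phi = rng.random((n_samples, basis))                 # stand-in local solutions

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        net.fit(K_patches[:1500], Phi[:1500])                # fit on solved samples
        Phi_pred = net.predict(K_patches[1500:])             # cheap surrogate basis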

  6. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems.

  7. Doing Solar Science With Extreme-ultraviolet and X-ray High Resolution Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Doschek, G. A.

    2005-12-01

    In this talk I will demonstrate how high resolution extreme-ultraviolet (EUV) and/or X-ray imaging spectroscopy can be used to provide unique information for solving several current key problems of the solar atmosphere, e.g., the morphology and reconnection site of solar flares, the structure of the transition region, and coronal heating. I will describe the spectra that already exist relevant to these problems and what the shortcomings of the data are, and how an instrument such as the Extreme-ultraviolet Imaging Spectrometer (EIS) on Solar-B as well as other proposed spectroscopy missions such as NEXUS and RAM will improve on the existing observations. I will discuss a few particularly interesting properties of the spectra and atomic data for highly ionized atoms that are important for the science problems.

  8. Prospects for mirage mediation

    NASA Astrophysics Data System (ADS)

    Pierce, Aaron; Thaler, Jesse

    2006-09-01

    Mirage mediation reduces the fine-tuning in the minimal supersymmetric standard model by dynamically arranging a cancellation between anomaly-mediated and modulus-mediated supersymmetry breaking. We explore the conditions under which a mirage "messenger scale" is generated near the weak scale and the little hierarchy problem is solved. We do this by explicitly including the dynamics of the SUSY-breaking sector needed to cancel the cosmological constant. The most plausible scenario for generating a low mirage scale does not readily admit an extra-dimensional interpretation. We also review the possibilities for solving the μ/Bμ problem in such theories, a potential hidden source of fine-tuning.

  9. Cross-borehole flowmeter tests for transient heads in heterogeneous aquifers.

    PubMed

    Le Borgne, Tanguy; Paillet, Frederick; Bour, Olivier; Caudal, Jean-Pierre

    2006-01-01

    Cross-borehole flowmeter tests have been proposed as an efficient method to investigate preferential flowpaths in heterogeneous aquifers, which is a major task in the characterization of fractured aquifers. Cross-borehole flowmeter tests are based on the idea that changing the pumping conditions in a given aquifer will modify the hydraulic head distribution in large-scale flowpaths, producing measurable changes in the vertical flow profiles in observation boreholes. However, inversion of flow measurements to derive flowpath geometry and connectivity and to characterize their hydraulic properties is still a subject of research. In this study, we propose a framework for cross-borehole flowmeter test interpretation that is based on a two-scale conceptual model: discrete fractures at the borehole scale and zones of interconnected fractures at the aquifer scale. We propose that the two problems may be solved independently. The first inverse problem consists of estimating the hydraulic head variations that drive the transient borehole flow observed in the cross-borehole flowmeter experiments. The second inverse problem is related to estimating the geometry and hydraulic properties of large-scale flowpaths in the region between pumping and observation wells that are compatible with the head variations deduced from the first problem. To solve the borehole-scale problem, we treat the transient flow data as a series of quasi-steady flow conditions and solve for the hydraulic head changes in individual fractures required to produce these data. The consistency of the method is verified using field experiments performed in a fractured-rock aquifer.

  10. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely pressing. This problem is especially acute for facilities where a violation of the accuracy of the DE will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnosing the accuracy of DE operation is one of the main elements of the industrial safety management system. This work addresses the problem of selecting the optimal variant of an error-detection system according to a validation criterion. Known methods for solving such problems have exponential estimates of computational effort. Thus, with a view to reducing the solution time, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems; the advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combination [1].

  11. PROSPECTIVE ASSOCIATIONS OF DEPRESSIVE RUMINATION AND SOCIAL PROBLEM SOLVING WITH DEPRESSION: A 6-MONTH LONGITUDINAL STUDY.

    PubMed

    Hasegawa, Akira; Hattori, Yosuke; Nishimura, Haruki; Tanno, Yoshihiko

    2015-06-01

    The main purpose of this study was to examine whether depressive rumination and social problem solving are prospectively associated with depressive symptoms. Nonclinical university students (N = 161, 64 men, 97 women; M age = 19.7 yr., SD = 3.6, range = 18-61) recruited from three universities in Japan completed the Beck Depression Inventory-Second Edition (BDI-II), the Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S), and the Means-Ends Problem-Solving Procedure at baseline, and the BDI-II again at 6 mo. later. A stepwise multiple regression analysis with the BDI-II and all subscales of the rumination and social problem solving measures as independent variables indicated that only the BDI-II scores and the Impulsivity/carelessness style subscale of the SPSI-R:S at Time 1 were significantly associated with BDI-II scores at Time 2 (β = 0.73, 0.12, respectively; independent variables accounted for 58.8% of the variance). These findings suggest that in Japan an impulsive and careless problem-solving style was prospectively associated with depressive symptomatology 6 mo. later, as contrasted with previous findings of a cycle of rumination and avoidance problem-solving style.

  12. a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
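
    The core of such ensemble-transform methods can be sketched as a discrete optimal transport linear program between the likelihood-derived importance weights on the prior ensemble and an equally weighted output ensemble; the toy 1D example below (an illustration of the transport-map idea, not the authors' AET implementation) uses SciPy's linprog:

        import numpy as np
        from scipy.optimize import linprog

        N = 30
        rng = np.random.default_rng(1)
        x = rng.normal(size=N)                   # prior ensemble (1D state)
        loglik = -0.5 * (x - 1.0) ** 2           # toy Gaussian likelihood
        w = np.exp(loglik - loglik.max()); w /= w.sum()

        # Plan T: T[i, j] moves mass from prior sample i (importance weight w[i])
        # to equally weighted posterior slot j; cost is squared distance.
        C = (x[:, None] - x[None, :]) ** 2
        A_eq, b_eq = [], []
        for i in range(N):                       # row sums = importance weights
            a = np.zeros((N, N)); a[i, :] = 1.0
            A_eq.append(a.ravel()); b_eq.append(w[i])
        for j in range(N):                       # column sums = uniform 1/N
            a = np.zeros((N, N)); a[:, j] = 1.0
            A_eq.append(a.ravel()); b_eq.append(1.0 / N)
        res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                      bounds=(0, None), method='highs')
        T = res.x.reshape(N, N)
        x_post = N * (T.T @ x)                   # posterior samples as convex combos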

  13. Resource Economics

    NASA Astrophysics Data System (ADS)

    Conrad, Jon M.

    2000-01-01

    Resource Economics is a text for students with a background in calculus, intermediate microeconomics, and a familiarity with the spreadsheet software Excel. The book covers basic concepts, shows how to set up spreadsheets to solve dynamic allocation problems, and presents economic models for fisheries, forestry, nonrenewable resources, stock pollutants, option value, and sustainable development. Within the text, numerical examples are posed and solved using Excel's Solver. These problems help make concepts operational, develop economic intuition, and serve as a bridge to the study of real-world problems of resource management. Through these examples and additional exercises at the end of Chapters 1 to 8, students can make dynamic models operational, develop their economic intuition, and learn how to set up spreadsheets for the simulation and optimization of resource and environmental systems. The book is unique in its use of spreadsheet software (Excel) to solve dynamic allocation problems. Conrad is co-author of a previous book for the Press on the subject for graduate students. The approach is extremely student-friendly, giving students the tools to apply research results to actual environmental issues.

  14. Collaborative Problem-Solving Environments; Proceedings for the Workshop CPSEs for Scientific Research, San Diego, California, June 20 to July 1, 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George

    1999-01-11

    A workshop on collaborative problem-solving environments (CPSEs) was held June 29 through July 1, 1999, in San Diego, California. The workshop was sponsored by the U.S. Department of Energy and the High Performance Network Applications Team of the Large Scale Networking Working Group. The workshop brought together researchers and developers from industry, academia, and government to identify, define, and discuss future directions in collaboration and problem-solving technologies in support of scientific research.

  15. Analysis of mathematical problem-solving ability based on metacognition on problem-based learning

    NASA Astrophysics Data System (ADS)

    Mulyono; Hadiyanti, R.

    2018-03-01

    Problem-solving is the primary purpose of the mathematics curriculum. Problem-solving ability is influenced by beliefs and metacognition. Metacognition, as a superordinate capability, can direct and regulate cognition and motivation, and thereby problem-solving processes. This study aims to (1) test and analyze the quality of problem-based learning and (2) investigate problem-solving capabilities based on metacognition. This research uses a mixed-methods design. The subjects are class XI Mathematics and Science students at Kesatrian 2 High School, Semarang, classified into tacit-use, aware-use, strategic-use, and reflective-use levels. Data were collected using a scale, interviews, and tests, and were processed with proportion tests, t-tests, and paired-samples t-tests. The results show that students at the tacit-use level were able to complete all the problems given, but did not understand what strategy was used or why. Students at the aware-use level were able to solve the problems and to build new knowledge through problem solving up to the indicators of understanding the problem and determining the strategies used, although not correctly. Students at the strategic-use level could apply and adopt a wide variety of appropriate strategies to solve the problems and achieved the indicators of re-examining the process and outcome. No students at the reflective-use level were found in this study. Based on these results, further study of the identification of metacognition in problem-solving is suggested, with larger samples, so that the characteristics of each metacognition level become clearer. Teachers need in-depth knowledge of students' metacognitive activity and its relationship with mathematical problem-solving and other problem resolution.

  16. Bringing NASA Technology Down to Earth

    NASA Technical Reports Server (NTRS)

    Lockney, Daniel P.; Taylor, Terry L.

    2018-01-01

    Whether putting rovers on Mars or sustaining life in extreme conditions, NASA develops technologies to solve some of the most difficult challenges ever faced. Through its Technology Transfer Program, the agency makes the innovations behind space exploration available to industry, academia, and the general public. This paper describes the primary mechanisms through which NASA disseminates technology to solve real-life problems; illustrates recent program accomplishments; and provides examples of spinoff success stories currently impacting everyday life.

  17. Examining the Epistemological Beliefs and Problem Solving Skills of Preservice Teachers during Teaching Practice

    ERIC Educational Resources Information Center

    Erdamar, Gurcu; Alpan, Gulgun

    2013-01-01

    This study aims to examine the development of preservice teachers' epistemological beliefs and problem solving skills in the process of teaching practice. Participants of this descriptive study were senior students from Gazi University's Faculty of Vocational Education ("n" = 189). They completed the Epistemological Belief Scale and…

  18. ADHD and Problem-Solving in Play

    ERIC Educational Resources Information Center

    Borg, Suzanne

    2009-01-01

    This paper reports a small-scale study to determine whether there is a difference in problem-solving abilities, from a play perspective, between individuals who are diagnosed as ADHD and are on medication and those not on medication. Ten children, five of whom were on medication and five not, diagnosed as ADHD predominantly inattentive type, were…

  19. Validation of the Solving Problems Scale with Teachers

    ERIC Educational Resources Information Center

    Ryan, Mary Elizabeth

    2011-01-01

    Rapid advancements in technology, global competitiveness, and an increasing demand for 21st-century skills, such as problem-solving, underscore the pivotal role teachers play to prepare our youth for an era of exponential change. Those at the forefront of education are challenged to equip students with skills and strategies necessary to think…

  20. Results and Implications of a Problem-Solving Treatment Program for Obesity.

    ERIC Educational Resources Information Center

    Mahoney, B. K.; And Others

    Data are from a large scale experimental study which was designed to evaluate a multimethod problem solving approach to obesity. Obese adult volunteers (N=90) were randomly assigned to three groups: maximal treatment, minimal treatment, and no treatment control. In the two treatment groups, subjects were exposed to bibliographic material and…

  1. The Development of Complex Problem Solving in Adolescence: A Latent Growth Curve Analysis

    ERIC Educational Resources Information Center

    Frischkorn, Gidon T.; Greiff, Samuel; Wüstenberg, Sascha

    2014-01-01

    Complex problem solving (CPS) as a cross-curricular competence has recently attracted more attention in educational psychology as indicated by its implementation in international educational large-scale assessments such as the Programme for International Student Assessment. However, research on the development of CPS is scarce, and the few…

  2. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    ERIC Educational Resources Information Center

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  3. The Strengthening Families Program 10-14: influence on parent and youth problem-solving skill.

    PubMed

    Semeniuk, Y; Brown, R L; Riesch, S K; Zywicki, M; Hopper, J; Henriques, J B

    2010-06-01

    The aim of this paper is to report the results of a preliminary examination of the efficacy of the Strengthening Families Program (SFP) 10-14 in improving parent and youth problem-solving skill. The hypotheses in this paper include: (1) youth and parents who participated in SFP would have lower mean scores immediately (T2) and 6 months (T3) post-intervention on indicators of hostile and negative problem-solving strategies; (2) higher mean scores on positive problem-solving strategies; and (3) youth who participated in SFP would have higher mean scores at T2 and at T3 on indicators of individual problem solving and problem-solving efficacy than youth in the comparison group. The dyads were recruited from elementary schools that had been stratified for race and assigned randomly to intervention or comparison conditions. The mean age of the youth was 11 years (SD = 1.04). Fifty-seven dyads (34 intervention, 23 control) were videotaped discussing a frequently occurring problem. The videotapes were analysed using the Iowa Family Interaction Rating Scale (IFIRS), and the data were analysed using the Dyadic Assessment Intervention Model. Most mean scores on the IFIRS did not change. One score changed as predicted: youth hostility decreased at T3. Two scores changed contrary to prediction: parent hostility increased at T3 and parent positive problem solving decreased at T2. SFP demonstrated questionable efficacy for problem-solving skill in this study.

  4. Internet computer coaches for introductory physics problem solving

    NASA Astrophysics Data System (ADS)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem-solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation addresses the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments is explained. The data collected from the large-scale experimental studies are discussed from the following aspects: the usage and usability of the coaches; the usefulness perceived by students; and the usefulness measured by final exam scores and a problem-solving rubric. The dissertation also addresses the implications drawn from this study, including using the data to direct future coach design and the difficulties of conducting authentic assessment of problem solving.

  5. A family of conjugate gradient methods for large-scale nonlinear equations.

    PubMed

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs low storage and the subproblem can be solved easily. Compared with existing solution methods for this problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
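
    For readers who want a concrete picture of the hyperplane-projection framework that methods of this family build on, the hedged Python sketch below uses the simplest search direction, d = -F(x), in place of the paper's conjugate gradient directions; the line-search parameters and the monotone test function are illustrative assumptions, not the authors' algorithm.

    ```python
    # A minimal sketch of the derivative-free projection framework
    # (Solodov-Svaiter style) underlying conjugate gradient projection
    # methods, with the simplest direction d = -F(x). F is assumed monotone.
    import numpy as np

    def projection_method(F, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=500):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            d = -Fx                      # CG-type directions would go here
            t = 1.0                      # backtracking line search
            while True:
                z = x + t * d
                if -F(z) @ d >= sigma * t * (d @ d) or t < 1e-12:
                    break
                t *= beta
            Fz = F(z)
            # Project x onto the hyperplane {y : Fz @ (y - z) = 0}, which
            # separates x from the solution set when F is monotone.
            x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        return x

    # Example: F(x) = x + sin(x) is monotone with root 0.
    print(projection_method(lambda x: x + np.sin(x), np.ones(5)))
    ```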

  6. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also in other areas of research that lead to large-scale eigenvalue problems for matrices which, roughly speaking, have a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
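
    The core projection idea can be illustrated with a toy numpy sketch. The matrix, sizes, and selection rule below are invented for illustration, and the sketch performs only the zeroth, non-iterative step of the scheme, not the convergent numerical generalization the paper introduces.

    ```python
    # Toy illustration: for a matrix with a pronounced dominant diagonal,
    # keep the degrees of freedom with the smallest diagonal entries
    # ("low-energy" states), project, and diagonalize the reduced model.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 2000, 80                       # full size, reduced-model size
    H = np.diag(np.sort(rng.uniform(0, 100, n))) + 0.1 * rng.standard_normal((n, n))
    H = 0.5 * (H + H.T)                   # symmetrize

    keep = np.argsort(np.diag(H))[:m]     # low-energy degrees of freedom
    H_eff = H[np.ix_(keep, keep)]         # projected effective model
    print("approx lowest eigenvalues:", np.linalg.eigvalsh(H_eff)[:5])
    print("exact  lowest eigenvalues:", np.linalg.eigvalsh(H)[:5])
    ```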

  7. Problem-solving skills and hardiness as protective factors against stress in Iranian nurses.

    PubMed

    Abdollahi, Abbas; Talib, Mansor Abu; Yaacob, Siti Nor; Ismail, Zanariah

    2014-02-01

    Nursing is a stressful occupation, even when compared with other health professions; therefore, it is necessary to advance our knowledge about the protective factors that can help reduce stress among nurses. The present study investigated the associations of problem-solving skills and hardiness with perceived stress in nurses. The participants, 252 nurses from six private hospitals in Tehran, completed the Personal Views Survey, the Perceived Stress Scale, and the Problem-Solving Inventory. Structural equation modeling (SEM) was used to analyse the data and test the research hypotheses. As expected, greater hardiness was associated with lower levels of perceived stress, and nurses low in perceived stress were more likely to approach problems, to rely on their own sense of internal personal control, and to demonstrate problem-solving confidence. These findings reinforce the importance of hardiness and problem-solving skills as protective factors against perceived stress among nurses, and could be important in training future nurses so that hardiness and problem-solving skills can be imparted, giving nurses more ability to control their perceived stress.

  8. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1-Minimization Estimator). Compared with existing packages for this problem, such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
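
    fastclime itself is an R package built on the parametric simplex method. As a rough illustration of the LP it solves, the Python sketch below forms one column of the CLIME estimator with SciPy's generic LP solver; the variable split, regularization level, and data are assumptions for illustration, not the package's implementation.

    ```python
    # One CLIME column as an LP: min ||w||_1 s.t. ||S w - e_j||_inf <= lam.
    # Splitting w = u - v with u, v >= 0 makes the objective linear.
    import numpy as np
    from scipy.optimize import linprog

    def clime_column(S, j, lam):
        p = S.shape[0]
        e = np.zeros(p); e[j] = 1.0
        c = np.ones(2 * p)                      # sum(u) + sum(v) = ||w||_1
        A = np.block([[S, -S], [-S, S]])        # encodes |S(u - v) - e| <= lam
        b = np.concatenate([lam + e, lam - e])
        res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
        u, v = res.x[:p], res.x[p:]
        return u - v

    X = np.random.default_rng(1).standard_normal((200, 10))
    S = np.cov(X, rowvar=False)
    print(clime_column(S, 0, lam=0.1))          # column 0 of the precision est.
    ```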

  9. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, for example Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, both for single impact force reconstruction and for consecutive impact force reconstruction.
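
    To make the sparse deconvolution model concrete, the sketch below solves the same l1-regularized formulation with plain proximal gradient descent (ISTA) instead of the paper's PDIPM; the impulse response, noise level, and regularization weight are illustrative assumptions.

    ```python
    # Sparse deconvolution min ||H f - y||_2^2 + lam * ||f||_1 via ISTA.
    # H is a Toeplitz convolution matrix built from a toy impulse response h.
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(2)
    n = 200
    h = np.exp(-0.05 * np.arange(n)) * np.sin(0.3 * np.arange(n))  # toy response
    H = toeplitz(h, np.zeros(n))                 # causal convolution operator
    f_true = np.zeros(n); f_true[[30, 90, 150]] = [1.0, -0.7, 0.5] # sparse impacts
    y = H @ f_true + 0.01 * rng.standard_normal(n)

    lam = 0.05
    L = np.linalg.norm(H, 2) ** 2                # Lipschitz constant of the grad
    f = np.zeros(n)
    for _ in range(2000):
        g = H.T @ (H @ f - y)                    # gradient of the data term
        z = f - g / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    print("recovered support:", np.nonzero(np.abs(f) > 0.1)[0])
    ```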

  10. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Men Chunhua; Romeijn, H. Edwin; Jia Xun

    2010-11-15

    Purpose: To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. Methods: The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and, for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem that accounts for MLC mechanical constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. Results: The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans were generated for all ten cases with extremely high efficiency: it takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. Conclusions: The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.
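
    The column generation loop itself is easiest to see on a toy problem. The sketch below applies the same restricted-master/pricing pattern to one-dimensional cutting stock, with cutting patterns playing the role the apertures play in VMAT; it assumes SciPy 1.7+ (for the dual values via res.ineqlin.marginals), and all problem data are made up.

    ```python
    # Toy column generation: restricted master LP plus a pricing subproblem,
    # the same loop structure as the VMAT optimizer (apertures <-> patterns).
    import numpy as np
    from scipy.optimize import linprog

    roll_len = 10.0
    sizes = np.array([3.0, 4.0, 5.0])   # demanded piece lengths
    demand = np.array([30, 20, 12])     # required counts per piece type

    # Trivial starting patterns: one piece type per roll.
    patterns = [np.eye(3)[i] * np.floor(roll_len / sizes[i]) for i in range(3)]

    for _ in range(20):
        A = np.column_stack(patterns)
        # Restricted master LP: minimize rolls used subject to demand coverage.
        res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
        duals = -res.ineqlin.marginals   # shadow prices of the demand rows
        # Pricing subproblem: find the pattern with the best reduced cost
        # (brute force is fine for three piece types).
        best_val, best_pat = 1.0, None
        for a in range(int(roll_len // sizes[0]) + 1):
            for b in range(int(roll_len // sizes[1]) + 1):
                for c in range(int(roll_len // sizes[2]) + 1):
                    pat = np.array([a, b, c], float)
                    if pat @ sizes <= roll_len and duals @ pat > best_val + 1e-9:
                        best_val, best_pat = duals @ pat, pat
        if best_pat is None:             # no improving column: LP optimal
            break
        patterns.append(best_pat)

    print("fractional rolls needed:", res.fun)
    ```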

  11. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT).

    PubMed

    Men, Chunhua; Romeijn, H Edwin; Jia, Xun; Jiang, Steve B

    2010-11-01

    To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and, for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem that accounts for MLC mechanical constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans were generated for all ten cases with extremely high efficiency: it takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.

  12. Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults

    ERIC Educational Resources Information Center

    Rubin, Allen; Yu, Miao

    2017-01-01

    This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…

  13. Designs for Operationalizing Collaborative Problem Solving for Automated Assessment

    ERIC Educational Resources Information Center

    Scoular, Claire; Care, Esther; Hesse, Friedrich W.

    2017-01-01

    Collaborative problem solving is a complex skill set that draws on social and cognitive factors. The construct remains in its infancy due to lack of empirical evidence that can be drawn upon for validation. The differences and similarities between two large-scale initiatives that reflect this state of the art, in terms of underlying assumptions…

  14. VET Workers' Problem-Solving Skills in Technology-Rich Environments: European Approach

    ERIC Educational Resources Information Center

    Hämäläinen, Raija; Cincinnato, Sebastiano; Malin, Antero; De Wever, Bram

    2014-01-01

    The European workplace is challenging VET adults' problem-solving skills in technology-rich environments (TREs). So far, no international large-scale assessment data has been available for VET. The PIAAC data comprise the most comprehensive source of information on adults' skills to date. The present study (N = 50 369) focuses on gaining insight…

  15. Complex Problem Solving in Educational Contexts--Something beyond "g": Concept, Assessment, Measurement Invariance, and Construct Validity

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno

    2013-01-01

    Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…

  16. Impulsive-Analytic Disposition in Mathematical Problem Solving: A Survey and a Mathematics Test

    ERIC Educational Resources Information Center

    Lim, Kien H.; Wagler, Amy

    2012-01-01

    The Likelihood-to-Act (LtA) survey and a mathematics test were used in this study to assess students' impulsive-analytic disposition in the context of mathematical problem solving. The results obtained from these two instruments were compared to those obtained using two widely-used scales: Need for Cognition (NFC) and Barratt Impulsivity Scale…

  17. Differential Relations between Facets of Complex Problem Solving and Students' Immigration Background

    ERIC Educational Resources Information Center

    Sonnleitner, Philipp; Brunner, Martin; Keller, Ulrich; Martin, Romain

    2014-01-01

    Whereas the assessment of complex problem solving (CPS) has received increasing attention in the context of international large-scale assessments, its fairness in regard to students' cultural background has gone largely unexplored. On the basis of a student sample of 9th-graders (N = 299), including a representative number of immigrant students (N…

  18. Assessment of Complex Problem Solving: What We Know and What We Don't Know

    ERIC Educational Resources Information Center

    Herde, Christoph Nils; Wüstenberg, Sascha; Greiff, Samuel

    2016-01-01

    Complex Problem Solving (CPS) is seen as a cross-curricular 21st century skill that has attracted interest in large-scale-assessments. In the Programme for International Student Assessment (PISA) 2012, CPS was assessed all over the world to gain information on students' skills to acquire and apply knowledge while dealing with nontransparent…

  19. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types.

    PubMed

    Webb, Margaret E; Little, Daniel R; Cropper, Simon J

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions.

  20. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types

    PubMed Central

    Webb, Margaret E.; Little, Daniel R.; Cropper, Simon J.

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions. PMID:27725805

  1. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. To tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated in two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples verify the feasibility of the proposed method for high-resolution velocity inversion.
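
    The Dix conversion used in the second experiment is a one-line formula; a minimal sketch is given below, with invented times and velocities. The TV-regularized inversion then stabilizes what is otherwise a noise-amplifying computation.

    ```python
    # Dix formula: interval velocity of layer i from RMS velocities,
    # v_int^2 = (t2*v2^2 - t1*v1^2) / (t2 - t1). All values illustrative.
    import numpy as np

    t = np.array([0.5, 1.0, 1.5, 2.0])              # two-way times (s)
    v_rms = np.array([1500., 1700., 1850., 2000.])  # RMS velocities (m/s)

    num = t[1:] * v_rms[1:] ** 2 - t[:-1] * v_rms[:-1] ** 2
    v_int = np.sqrt(num / (t[1:] - t[:-1]))         # layer-by-layer conversion
    print("interval velocities (m/s):", v_int)
    ```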

  2. Realistic anomaly-mediated supersymmetry breaking

    NASA Astrophysics Data System (ADS)

    Chacko, Zacharia; Luty, Markus A.; Maksymyk, Ivan; Pontón, Eduardo

    2000-03-01

    We consider supersymmetry breaking communicated entirely by the superconformal anomaly in supergravity. This scenario is naturally realized if supersymmetry is broken in a hidden sector whose couplings to the observable sector are suppressed by more than powers of the Planck scale, as occurs if supersymmetry is broken in a parallel universe living in extra dimensions. This scenario is extremely predictive: soft supersymmetry breaking couplings are completely determined by anomalous dimensions in the effective theory at the weak scale. Gaugino and scalar masses are naturally of the same order, and flavor-changing neutral currents are automatically suppressed. The most glaring problem with this scenario is that slepton masses are negative in the minimal supersymmetric standard model. We point out that this problem can be simply solved by coupling extra Higgs doublets to the leptons. Lepton flavor-changing neutral currents can be naturally avoided by approximate symmetries. We also describe more speculative solutions involving compositeness near the weak scale. We then turn to electroweak symmetry breaking. Adding an explicit μ term gives a value for Bμ that is too large by a factor of ~ 100. We construct a realistic model in which the μ term arises from the vacuum expectation value of a singlet field, so all weak-scale masses are directly related to m3/2. We show that fully realistic electroweak symmetry breaking can occur in this model with moderate fine-tuning.

  3. Best candidates for cognitive treatment of illness perceptions in chronic low back pain: results of a theory-driven predictor study.

    PubMed

    Siemonsma, Petra C; Stuvie, Ilse; Roorda, Leo D; Vollebregt, Joke A; Lankhorst, Gustaaf J; Lettinga, Ant T

    2011-04-01

    The aim of this study was to identify treatment-specific predictors of the effectiveness of a method of evidence-based treatment: cognitive treatment of illness perceptions. This study focuses on what treatment works for whom, whereas most prognostic studies focusing on chronic non-specific low back pain rehabilitation aim to reduce the heterogeneity of the population of patients who are suitable for rehabilitation treatment in general. Three treatment-specific predictors were studied in patients with chronic non-specific low back pain receiving cognitive treatment of illness perceptions: a rational approach to problem-solving, discussion skills and verbal skills. Hierarchical linear regression analysis was used to assess their predictive value. Short-term changes in physical activity, measured with the Patient-Specific Functioning List, were the outcome measure for cognitive treatment of illness perceptions effect. A total of 156 patients with chronic non-specific low back pain participated in the study. Rational problem-solving was found to be a significant predictor for the change in physical activity. Discussion skills and verbal skills were non-significant. Rational problem-solving explained 3.9% of the total variance. The rational problem-solving scale results are encouraging, because chronic non-specific low back pain problems are complex by nature and can be influenced by a variety of factors. A minimum score of 44 points on the rational problem-solving scale may assist clinicians in selecting the most appropriate candidates for cognitive treatment of illness perceptions.

  4. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    PubMed Central

    Musen, M. A.

    1998-01-01

    When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community. PMID:9929181

  5. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    PubMed

    Musen, M A

    1998-01-01

    When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.

  6. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method for teaching clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. 103 students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases and was permanently available during five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving when used as a supplementary teaching facility during a clinical clerkship. The increased problem-solving ability learned with CBMT had no demonstrable effect on performance with other neurological problems.

  7. Investigating the role of future thinking in social problem solving.

    PubMed

    Noreen, Saima; Whyte, Katherine E; Dritschel, Barbara

    2015-03-01

    There is well-established evidence that both rumination and depressed mood negatively impact the ability to solve social problems. A preliminary stage of the social problem-solving process may be catapulting oneself forward in time to think about the consequences of a problem before attempting to solve it. The aim of the present study was to examine how thinking about the consequences of a social problem being resolved or unresolved prior to solving it influences the solution of the problem as a function of levels of rumination and dysphoric mood. Eighty-six participants initially completed the Beck Depression Inventory-II (BDI-II) and the Ruminative Response Scale (RRS). They were then presented with six social problems and generated consequences for half of the problems being resolved and half of the problems remaining unresolved. Participants then solved some of the problems and, following a delay, were asked to recall all of the consequences previously generated. Participants reporting higher levels of depressed mood and rumination were less effective at generating problem solutions. Specifically, those reporting higher levels of rumination produced less effective solutions for social problems for which they had previously generated unresolved rather than resolved consequences. We also found that individuals higher in rumination, irrespective of depressed mood, recalled more of the unresolved consequences in a subsequent memory test. As participants did not solve problems for scenarios where no consequences were generated, no baseline measure of problem solving was obtained. Our results suggest that thinking about the consequences of a problem remaining unresolved may impair the generation of effective solutions in individuals with higher levels of rumination. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

    This paper describes an approach to developing a general-purpose solution algorithm for large-scale job-shop scheduling problems (JSP) using Local Clustering Organization (LCO). Building on LCO's effective performance on large-scale scheduling in previous studies, we examine how to solve JSP while preserving the stability that induces better solutions. To improve solution performance for JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solution method that introduces effective local clustering for this structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently through a clustered parallel search that extends over plural machines. Results of applying the extended LCO to problems of various scales show that it reduces the makespan and delivers stable performance.

  9. [Prevalence of and factors related to depression in high school students].

    PubMed

    Eskin, Mehmet; Ertekin, Kamil; Harlak, Hacer; Dereboy, Ciğdem

    2008-01-01

    The study aimed at investigating the prevalence of and factors related to depression in high school students. A total of 805 (n = 367 girls; n = 438 boys) first-year students from three high schools in the city of Aydin filled in a self-report questionnaire that contained questions about socio-demographics, academic achievement and religious belief. It also included a depression rating scale, a social support scale, a problem-solving inventory and an assertiveness scale. T-tests, chi-square tests, Pearson product-moment correlation coefficients, and logistic regression analysis were used to analyze the data. 141 students (17.5%) scored at or above the cut-off point on the Children's Depression Inventory (CDI). In the first regression analyses, low self-esteem, low grade point average (GPA) and low perceived social support from friends were the predictors of depression in boys; low self-esteem, low paternal educational level and low social support from friends were the predictors in girls. When self-esteem scores were excluded, low GPA, low perceived social support from friends and family, and inefficient problem-solving skills were predictors of depression in boys; low perceived social support from friends and family, low paternal educational level, and inefficient problem-solving skills were the independent predictors in girls. Depression is prevalent in high school students. Low self-esteem, low perceived social support from peers and family, and inefficient problem-solving skills appear to be risk factors for adolescent depression. Low GPA for boys and low paternal education for girls were gender-specific risk factors. Psychosocial interventions geared towards increasing self-esteem, social support and problem-solving skills may be effective in the prevention and treatment of adolescent depression.

  10. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
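
    The computation the patent accelerates can be stated in a few lines: nonnegative least squares solved independently for each observation vector. The baseline sketch below uses SciPy's NNLS column by column; the combinatorial algorithm's speedup comes from regrouping these solves by shared passive (unconstrained) variable sets, which the sketch does not attempt. All data are illustrative.

    ```python
    # Column-by-column nonnegative least squares over many observation
    # vectors: the straightforward baseline the combinatorial algorithm
    # reorganizes for speed.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    A = rng.random((50, 10))        # mixing matrix
    Y = rng.random((50, 1000))      # many observation vectors (e.g. spectra)

    X = np.column_stack([nnls(A, Y[:, k])[0] for k in range(Y.shape[1])])
    print("solution block shape:", X.shape)   # 10 x 1000, all entries >= 0
    ```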

  11. Inverse problems in the design, modeling and testing of engineering systems

    NASA Technical Reports Server (NTRS)

    Alifanov, Oleg M.

    1991-01-01

    Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.

  12. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.

  13. The Effect of Training Problem-Solving Skills on Coping Skills of Depressed Nursing and Midwifery Students

    PubMed Central

    Ebrahimi, Hossein; Barzanjeh Atri, Shirin; Ghavipanjeh, Somayeh; Farnam, Alireza; Gholizadeh, Leyla

    2013-01-01

    Introduction: Nurses have a considerable role in caring and health promotion. Depressed nurses are deficient in the coping skills that are important to mental health. This study evaluated the effectiveness of training in problem-solving skills on the coping skills of depressed nursing and midwifery students. Methods: The Beck Depression Scale and a coping skills questionnaire were administered in the Tabriz and Urmia nursing and midwifery schools. Ninety-two students who had achieved a score above 10 on the Beck Depression Scale were selected; 46 were randomly assigned to the study group and 46 to the control group. The intervention group received six sessions of problem-solving training within three weeks. After the end of the sessions, the coping skills and depression scales were administered to both groups and analyzed. Results: Before the intervention there were no significant differences in mean coping skills between the control and study groups. After the intervention, a significant difference was observed between the control group and the study group, and comparing mean coping skills before and after the intervention showed a significant difference within the study group. Conclusion: Training in problem-solving skills increased the coping skills of depressed students. Given the role of coping skills in people's mental health, increasing coping skills can promote mental health, provide the basis for caring skills, and improve the quality of nurses' caring skills. PMID:25276704

  14. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
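
    The QUBO encoding step can be illustrated on a toy problem: fold a linear constraint into the objective as a quadratic penalty and minimize over binary variables. In the sketch below, brute-force enumeration stands in for the adiabatic quantum optimizer, and the costs, sizes, and penalty weight are invented.

    ```python
    # Toy QUBO encoding: require exactly k of n binary controls on via the
    # penalty P*(sum_i x_i - k)^2. Using x_i^2 = x_i, the penalty expands to
    # P*(1 - 2k)*sum_i x_i + 2P*sum_{i<j} x_i x_j (plus a dropped constant),
    # which maps onto the diagonal and off-diagonal of the QUBO matrix Q.
    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    n, k, P = 10, 3, 50.0
    cost = rng.random(n)                  # per-control costs (illustrative)

    Q = np.diag(cost + P * (1.0 - 2.0 * k)) + P * (np.ones((n, n)) - np.eye(n))

    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print("best binary control vector:", best, "ones:", sum(best))
    ```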

  15. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal; there also exists a direct linear-time algorithm for 1D TV denoising known as the taut string algorithm. Condat's algorithm is based on a problem dual to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm together with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
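
    For reference, the 1D TV problem addressed here can be solved compactly, though far less efficiently than by Condat's direct algorithm, with projected gradient ascent on the dual. The sketch below implements that generic dual solver, not Condat's algorithm or the taut string method; the test signal and parameters are illustrative.

    ```python
    # 1D TV denoising min_x 0.5*||x - y||^2 + lam * sum|x_{i+1} - x_i| via
    # projected gradient ascent on the dual variable p, clipped to [-lam, lam].
    # Step size 0.25 is safe since the 1D difference operator has ||D D^T|| < 4.
    import numpy as np

    def tv1d(y, lam, n_iter=2000):
        Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))  # D^T p
        p = np.zeros(len(y) - 1)
        for _ in range(n_iter):
            p = np.clip(p + 0.25 * np.diff(y - Dt(p)), -lam, lam)
        return y - Dt(p)            # primal solution recovered from the dual

    rng = np.random.default_rng(5)
    y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
    print(tv1d(y, lam=1.0)[[0, 49, 50, 99]])   # piecewise-constant output
    ```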

  16. Self-interacting inelastic dark matter: a viable solution to the small scale structure problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juan.herrero-garcia@adelaide.edu.au

    2017-03-01

    Self-interacting dark matter has been proposed as a solution to the small-scale structure problems, such as the observed flat cores in dwarf and low surface brightness galaxies. If scattering takes place through light mediators, the scattering cross section relevant to solving these problems may fall into the non-perturbative regime, leading to a non-trivial velocity dependence, which allows compatibility with limits stemming from cluster-size objects. However, these models are strongly constrained by different observations, in particular by the requirements that the decay of the light mediator is sufficiently rapid (before Big Bang Nucleosynthesis) and by direct detection. A natural solution to reconcile both requirements is inelastic endothermic interactions, such that scatterings in direct detection experiments are suppressed or even kinematically forbidden if the mass splitting between the two states is sufficiently large. Using an exact solution when numerically solving the Schrödinger equation, we study such scenarios and find regions in the parameter space of dark matter and mediator masses, and the mass splitting of the states, where the small-scale structure problems can be solved, the dark matter has the correct relic abundance and direct detection limits can be evaded.

  17. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on a few two-phase flow problems with phase appearance/disappearance phenomena, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and the second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
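
    The Jacobian-free mechanism is easy to demonstrate: SciPy's newton_krylov approximates Jacobian-vector products by finite differences of the residual, so no Jacobian matrix is ever formed. The toy boundary-value problem below (u'' = e^u with homogeneous Dirichlet data) is an illustrative stand-in for the discretized two-phase flow equations, not the paper's system.

    ```python
    # JFNK in miniature: Newton's method with Krylov (LGMRES) linear solves,
    # where Jacobian-vector products come from finite differences of F.
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 100
    h = 1.0 / (n + 1)

    def residual(u):
        upad = np.concatenate(([0.0], u, [0.0]))      # Dirichlet boundaries
        lap = (upad[2:] - 2 * upad[1:-1] + upad[:-2]) / h**2
        return lap - np.exp(u)                        # F(u) = 0 to solve

    u = newton_krylov(residual, np.zeros(n), method="lgmres", verbose=False)
    print("max |u|:", np.abs(u).max())
    ```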

  18. An Observational Study for Evaluating the Effects of Interpersonal Problem-Solving Skills Training on Behavioural Dimensions

    ERIC Educational Resources Information Center

    Anliak, Sakire; Sahin, Derya

    2010-01-01

    The present observational study was designed to evaluate the effectiveness of the I Can Problem Solve (ICPS) programme on behavioural change from aggression to pro-social behaviours by using the DECB rating scale. Non-participant observation method was used to collect data in pretest-training-posttest design. It was hypothesised that the ICPS…

  19. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  20. Lesion mapping of social problem solving

    PubMed Central

    Colom, Roberto; Paul, Erick J.; Chau, Aileen; Solomon, Jeffrey; Grafman, Jordan H.

    2014-01-01

    Accumulating neuroscience evidence indicates that human intelligence is supported by a distributed network of frontal and parietal regions that enable complex, goal-directed behaviour. However, the contributions of this network to social aspects of intellectual function remain to be well characterized. Here, we report a human lesion study (n = 144) that investigates the neural bases of social problem solving (measured by the Everyday Problem Solving Inventory) and examine the degree to which individual differences in performance are predicted by a broad spectrum of psychological variables, including psychometric intelligence (measured by the Wechsler Adult Intelligence Scale), emotional intelligence (measured by the Mayer, Salovey, Caruso Emotional Intelligence Test), and personality traits (measured by the Neuroticism-Extraversion-Openness Personality Inventory). Scores for each variable were obtained, followed by voxel-based lesion–symptom mapping. Stepwise regression analyses revealed that working memory, processing speed, and emotional intelligence predict individual differences in everyday problem solving. A targeted analysis of specific everyday problem solving domains (involving friends, home management, consumerism, work, information management, and family) revealed psychological variables that selectively contribute to each. Lesion mapping results indicated that social problem solving, psychometric intelligence, and emotional intelligence are supported by a shared network of frontal, temporal, and parietal regions, including white matter association tracts that bind these areas into a coordinated system. The results support an integrative framework for understanding social intelligence and make specific recommendations for the application of the Everyday Problem Solving Inventory to the study of social problem solving in health and disease. PMID:25070511

  1. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
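
    The iteratively reweighted least-squares loop at the core of the method fits in a few lines when the solves are done densely. In the sketch below, the dense normal-equation solve marks the spot where the randomized generalized singular value decomposition would take over on large problems; all data and parameters are illustrative.

    ```python
    # IRLS for TV-regularized least squares: each pass solves a weighted l2
    # problem whose weights 1/sqrt(|Dx|^2 + eps) approximate the TV term.
    import numpy as np

    rng = np.random.default_rng(6)
    m, n, alpha, eps = 80, 60, 1.0, 1e-6
    A = rng.standard_normal((m, n))
    x_true = np.concatenate([np.zeros(30), np.ones(30)])   # blocky model
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    D = np.diff(np.eye(n), axis=0)                         # 1D difference operator
    x = np.zeros(n)
    for _ in range(30):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)              # TV reweighting
        x = np.linalg.solve(A.T @ A + alpha * D.T @ (w[:, None] * D), A.T @ b)
    print("edge recovered near index 30:", x[28:32])
    ```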

  2. International and domestic mobile satellite regulatory proceedings: A comparison of outcomes and discussion of implications

    NASA Technical Reports Server (NTRS)

    Freibaum, Jerry

    1988-01-01

    It is argued that we are on the threshold of a new multibillion dollar industry that can enhance economic development, dramatically improve disaster assessment and relief operations, improve rural health care and solve many safety and security concerns of the transportation industry. Further delays in resolving conflicts between vested interests will be extremely costly to users, providers and equipment manufacturers. Conference participants are urged to move quickly and decisively towards solving outstanding problems.

  3. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions which are optimal under different criteria is presented, including both the best empirically proven functions and new functions that may be worth trying.
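
    One classic member of the family of scaling functions under comparison is linear scaling in Goldberg's style, which preserves the average fitness and maps the best fitness to C times the average. The sketch below implements that candidate; the choice C = 2 and the clipping of negative values are common conventions, not the paper's prescription.

    ```python
    # Linear fitness scaling f' = a*f + b with mean preserved and
    # max mapped to C * mean, controlling selection pressure in a GA.
    import numpy as np

    def linear_scale(f, C=2.0):
        f = np.asarray(f, dtype=float)
        f_avg, f_max = f.mean(), f.max()
        if np.isclose(f_max, f_avg):             # flat population: no scaling
            return np.full_like(f, f_avg)
        a = (C - 1.0) * f_avg / (f_max - f_avg)
        b = f_avg * (f_max - C * f_avg) / (f_max - f_avg)
        return np.maximum(a * f + b, 0.0)        # keep fitnesses nonnegative

    f = np.array([1.0, 2.0, 3.0, 10.0])
    print(linear_scale(f))   # mean preserved (up to clipping), max -> 2 * mean
    ```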

  4. Bifurcations at the dawn of Modern Science

    NASA Astrophysics Data System (ADS)

    Coullet, Pierre

    2012-11-01

    In this article we review two classical bifurcation problems: the instability of an axisymmetric floating body, studied by Archimedes 2300 years ago, and the multiplicity of images observed in curved mirrors, a problem solved by Alhazen in the 11th century. We first introduce these problems, trying to keep some of the flavor of the original analysis, and then show how they can be reduced to a question of extremal distances studied by Apollonius.

  5. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to large numbers of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.

  6. [The application of new technologies to solving maths problems for students with learning disabilities: the 'underwater school'].

    PubMed

    Miranda-Casas, A; Marco-Taverner, R; Soriano-Ferrer, M; Melià de Alba, A; Simó-Casañ, P

    2008-01-01

    Different procedures have demonstrated efficacy in teaching cognitive and metacognitive strategies for problem solving in mathematics. Some studies have used computer-based problem-solving instructional programs. To analyze, in students with learning disabilities, the efficacy of cognitive strategy training for problem solving delivered in three instructional formats: a teacher-directed (T-D) program, a computer-assisted instruction (CAI) program, and a combined program (T-D + CAI). Forty-four children with mathematics learning disabilities, between 8 and 10 years old, participated in this study. The children were randomly assigned to one of the three instructional formats or to a control group without cognitive strategy training. In all three instructional conditions the students learnt linguistic and visual cognitive strategies for problem solving through a self-instructional procedure. Several types of measurements were used to analyse the possible differential efficacy of the three instructional methods: problem-solving tests, marks in mathematics, an internal achievement responsibility scale, and teacher ratings of school behaviours. Our findings show that the T-D training group and the T-D + CAI group improved significantly on math word problem solving and on marks in maths from pre- to post-testing. In addition, the results indicated that the students of the T-D + CAI group solved more real-life problems and developed more internal attributions compared to both the control and CAI groups. Finally, with regard to school behaviours, improvements in school adjustment and learning problems were observed in the students of the group with the combined instructional format (T-D + CAI).

  7. Congressional Testimony on School Violence: Early Childhood, Youth and Families Subcommittee.

    ERIC Educational Resources Information Center

    Poland, Scott

    1998-01-01

    School violence has been linked to youth not recognizing the finality of death, extreme violence portrayed in the media, availability of guns, student reluctance to "tell," and lack of curriculum that teaches children anger management and problem-solving skills. Recommendations include making prevention programs a priority and…

  8. Changing University Students' Alternative Conceptions of Optics by Active Learning

    ERIC Educational Resources Information Center

    Hadžibegovic, Zalkida; Sliško, Josip

    2013-01-01

    Active learning is individual and group participation in effective activities such as in-class observing, writing, experimenting, discussion, solving problems, and talking about to-be-learned topics. Some instructors believe that active learning is impossible, or at least extremely difficult to achieve in large lecture sessions. Nevertheless, the…

  9. The Relationship of Social Problem-Solving Skills and Dysfunctional Attitudes with Risk of Drug Abuse among Dormitory Students at Isfahan University of Medical Sciences

    PubMed Central

    Nasrazadani, Ehteram; Maghsoudi, Jahangir; Mahrabi, Tayebeh

    2017-01-01

    Background: Dormitory students encounter multiple social factors which cause pressure, such as new social relationships, fear of the future, and separation from family, which could cause serious problems such as a tendency toward drug abuse. This research was conducted with the goal of determining social problem-solving skills, dysfunctional attitudes, and risk of drug abuse among dormitory students of Isfahan University of Medical Sciences, Iran. Materials and Methods: This was a descriptive-analytical, correlational, and cross-sectional study. The research sample consisted of 211 students living in dormitories. The participants were selected using a randomized quota sampling method. The data collection tools included the Social Problem-Solving Inventory (SPSI), the Dysfunctional Attitude Scale (DAS), and the Identifying People at Risk of Addiction Questionnaire. Results: The results indicated an inverse relationship between social problem-solving skills and risk of drug abuse (P = 0.0002), a direct relationship between dysfunctional attitude and risk of drug abuse (P = 0.030), and an inverse relationship between social problem-solving skills and dysfunctional attitude among students (P = 0.0004). Conclusions: Social problem-solving skills correlate with dysfunctional attitudes. As a result, teaching these skills and the way to create efficient attitudes should be considered in dormitory students. PMID:28904539

  10. The Relationship of Social Problem-Solving Skills and Dysfunctional Attitudes with Risk of Drug Abuse among Dormitory Students at Isfahan University of Medical Sciences.

    PubMed

    Nasrazadani, Ehteram; Maghsoudi, Jahangir; Mahrabi, Tayebeh

    2017-01-01

    Dormitory students encounter multiple social factors which cause pressure, such as new social relationships, fear of the future, and separation from family, which could cause serious problems such as tendency toward drug abuse. This research was conducted with the goal to determine social problem-solving skills, dysfunctional attitudes, and risk of drug abuse among dormitory students of Isfahan University of Medical Sciences, Iran. This was a descriptive-analytical, correlational, and cross-sectional research. The research sample consisted of 211 students living in dormitories. The participants were selected using randomized quota sampling method. The data collection tools included the Social Problem-Solving Inventory (SPSI), Dysfunctional Attitude Scale (DAS), and Identifying People at Risk of Addiction Questionnaire. The results indicated an inverse relationship between social problem-solving skills and risk of drug abuse (P = 0.0002), a direct relationship between dysfunctional attitude and risk of drug abuse (P = 0.030), and an inverse relationship between social problem-solving skills and dysfunctional attitude among students (P = 0.0004). Social problem-solving skills have a correlation with dysfunctional attitudes. As a result, teaching these skills and the way to create efficient attitudes should be considered in dormitory students.

  11. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It is able to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as for large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down thanks to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
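
    The Rayleigh quotient iteration at the heart of the second phase is compact enough to sketch. Below is a minimal Python illustration for a standard symmetric eigenproblem (B = I, direct inner solves); the paper's generalized B-matrix, subspace phase, and preconditioned conjugate-gradient inner solves are not reproduced.

```python
# Minimal sketch (our illustration): Rayleigh quotient iteration for a
# standard symmetric eigenproblem Ax = lambda*x.  The generalized B-matrix
# and the iterative CG inner solves of the paper are omitted.
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-10, max_iter=50):
    x = x0 / np.linalg.norm(x0)
    sigma = x @ A @ x                          # Rayleigh quotient estimate
    for _ in range(max_iter):
        try:
            y = np.linalg.solve(A - sigma * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                              # exactly singular shift: converged
        x = y / np.linalg.norm(y)
        sigma = x @ A @ x
        if np.linalg.norm(A @ x - sigma * x) < tol:
            break
    return sigma, x                            # converges to *an* eigenpair near x0

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2                              # symmetrize
lam, v = rayleigh_quotient_iteration(A, rng.standard_normal(100))
```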

  12. The architecture of adaptive neural network based on a fuzzy inference system for implementing intelligent control in photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Gimazov, R.; Shidlovskiy, S.

    2018-05-01

    In this paper, we consider the architecture of an algorithm for extremum power regulation (maximum power point tracking) in a photovoltaic system. An algorithm based on an adaptive neural network with fuzzy inference is proposed. Implementing such an algorithm not only solves a number of problems present in existing algorithms for extremum power regulation of photovoltaic systems, but also lays the groundwork for a universal control system for photovoltaic systems.
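
    The abstract does not specify the controller itself, so as orientation only, here is the classic perturb-and-observe hill climb that extremum-seeking (maximum power point tracking) algorithms such as this one aim to improve; the pv_power curve is a made-up stand-in for a measured P-V characteristic.

```python
# Hypothetical sketch of the classic perturb-and-observe (P&O) hill climb
# that MPPT controllers refine; pv_power is a made-up single-peak P-V curve,
# standing in for a measurement of panel power at a given voltage.
def pv_power(v):
    return max(0.0, v * (5.0 - 0.05 * v**2))   # toy curve, peak near v = 5.8

def perturb_and_observe(v=2.0, step=0.1, iters=200):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step                  # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                         # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev                           # oscillates around the MPP

v_mpp, p_mpp = perturb_and_observe()
```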

  13. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving it. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the node mapping phase and the link mapping phase, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the resources available at each substrate node and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts distributed constraint optimization, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
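
    A toy sketch of the greedy node-mapping phase as described (rank substrate nodes by available resources, prefer nodes close to already-used ones) may make the two-phase split concrete; the data structures and tie-breaking rule here are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch of a greedy node-mapping phase: each virtual node is
# placed on the substrate node with the most available capacity, preferring
# nodes close to already-used ones.  Data structures are hypothetical.
def greedy_node_mapping(virtual_cpu, substrate_cpu, dist):
    mapping, used = {}, set()
    for vnode, need in sorted(virtual_cpu.items(), key=lambda kv: -kv[1]):
        candidates = [s for s, cap in substrate_cpu.items()
                      if cap >= need and s not in used]
        if not candidates:
            return None                          # mapping infeasible
        # prefer high residual capacity, then short distance to used nodes
        best = min(candidates,
                   key=lambda s: (-substrate_cpu[s],
                                  min((dist[s][u] for u in used), default=0)))
        mapping[vnode] = best
        used.add(best)
        substrate_cpu[best] -= need              # consume substrate resources
    return mapping

substrate = {"a": 10, "b": 8, "c": 6}
dist = {"a": {"b": 1, "c": 2}, "b": {"a": 1, "c": 1}, "c": {"a": 2, "b": 1}}
print(greedy_node_mapping({"v1": 5, "v2": 4}, substrate, dist))
```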

  14. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
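
    For context, the baseline operation whose cost motivates the active-subspace factorization is the full-SVD proximal step for the nuclear norm (singular value thresholding). A minimal sketch of that baseline, not of the authors' algorithm:

```python
# Minimal sketch of singular value thresholding, the full-SVD proximal step
# for the nuclear norm whose per-iteration cost motivates the active-subspace
# factorization (this is the baseline operation, not the authors' algorithm).
import numpy as np

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)               # soft-threshold singular values
    return (U * s) @ Vt                        # low-rank result

rng = np.random.default_rng(1)
L = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 200))  # rank-5 signal
X = L + 0.01 * rng.standard_normal((200, 200))                     # plus noise
print(np.linalg.matrix_rank(svt(X, tau=1.0)))  # noise directions are removed
```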

  15. Neural architecture design based on extreme learning machine.

    PubMed

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons, and the corresponding interconnection weights. This problem has been widely studied in many research works, but the existing solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides high generalization capability and a unique solution for the architecture design. Moreover, the selected final network retains only those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
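
    The ELM training step that the architecture design builds on is itself only a few lines: fix random hidden weights, then solve a linear least-squares problem for the output weights. A minimal sketch (the paper's pruning of irrelevant inputs is not reproduced):

```python
# Minimal ELM sketch: hidden-layer weights are random and fixed; only the
# output weights are learned, via a least-squares solve.  The paper's
# architecture-selection procedure on top of this is not reproduced.
import numpy as np

def elm_train(X, T, n_hidden, rng):
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)      # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
T = (X[:, 0] * X[:, 1] > 0).astype(float)             # toy binary target
W, b, beta = elm_train(X, T, n_hidden=50, rng=rng)
acc = ((elm_predict(X, W, b, beta) > 0.5) == T).mean()
```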

  16. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  17. A Kohonen-like decomposition method for the Euclidean traveling salesman problem: KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
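
    The elastic-ring self-organizing map update that Kohonen-style TSP heuristics build on can be sketched briefly; KNIES adds statistics-preserving dispersion steps that are omitted here, so this is a generic baseline, not KNIES itself.

```python
# Compact sketch of the elastic-ring SOM update underlying Kohonen-style TSP
# heuristics.  KNIES adds dispersion steps that preserve data statistics;
# those are not reproduced here.
import numpy as np

def som_tsp(cities, n_epochs=200, lr=0.8, rng=np.random.default_rng(0)):
    n = len(cities) * 2                        # ring with 2 neurons per city
    ring = rng.uniform(cities.min(), cities.max(), (n, 2))
    radius = n // 2
    for _ in range(n_epochs):
        for c in cities[rng.permutation(len(cities))]:
            w = np.argmin(np.linalg.norm(ring - c, axis=1))   # winner neuron
            d = np.minimum(np.abs(np.arange(n) - w),
                           n - np.abs(np.arange(n) - w))      # ring distance
            h = np.exp(-(d / max(radius, 1)) ** 2)            # neighborhood
            ring += lr * h[:, None] * (c - ring)              # pull toward city
        radius = max(1, int(radius * 0.97))    # shrink neighborhood over time
        lr *= 0.99
    # read the tour off the ring: order cities by their winning neuron index
    order = np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1))
                        for c in cities])
    return order

cities = np.random.default_rng(1).uniform(0, 1, (30, 2))
tour = som_tsp(cities)
```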

  18. Applications of remote sensing to estuarine problems. [estuaries of Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.

    1975-01-01

    A variety of siting problems for the estuaries of the lower Chesapeake Bay have been solved with cost beneficial remote sensing techniques. Principal techniques used were repetitive 1:30,000 color photography of dye emitting buoys to map circulation patterns, and investigation of water color boundaries via color and color infrared imagery to scales of 1:120,000. Problems solved included sewage outfall siting, shoreline preservation and enhancement, oil pollution risk assessment, and protection of shellfish beds from dredge operations.

  19. Framework for Detection and Localization of Extreme Climate Event with Pixel Recursive Super Resolution

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.

    2017-12-01

    Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have been recently proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present a network using the pixel recursive super resolution model that synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. This approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process to increase data resolution.

  20. Advanced imaging in acute and chronic deep vein thrombosis

    PubMed Central

    Karande, Gita Yashwantrao; Sanchez, Yadiel; Baliyan, Vinit; Mishra, Vishala; Ganguli, Suvranu; Prabhakar, Anand M.

    2016-01-01

    Deep venous thrombosis (DVT) affecting the extremities is a common clinical problem. Prompt imaging aids in rapid diagnosis and adequate treatment. While ultrasound (US) remains the workhorse of detection of extremity venous thrombosis, CT and MRI are commonly used as the problem-solving tools either to visualize the thrombosis in central veins like superior or inferior vena cava (IVC) or to test for the presence of complications like pulmonary embolism (PE). The cross-sectional modalities also offer improved visualization of venous collaterals. The purpose of this article is to review the established modalities used for characterization and diagnosis of DVT, and further explore promising innovations and recent advances in this field. PMID:28123971

  1. Ways of problem solving as predictors of relapse in alcohol dependent male inpatients.

    PubMed

    Demirbas, Hatice; Ilhan, Inci Ozgur; Dogan, Yildirim Beyatli

    2012-01-01

    The purpose of this study was to identify how remitters and relapsers view their everyday problem-solving strategies. A total of 128 alcohol-dependent male inpatients who were hospitalized at the Ankara University Psychiatry Clinic, Alcohol and Substance Abuse Treatment Unit were recruited for the study. Subjects' demographic status and alcohol use histories were assessed by a self-report questionnaire. Patients were also evaluated with the Coopersmith Self-esteem Inventory (CSI), the Spielberger State-Trait Anxiety Scale (STAI-I-II), and the Problem Solving Inventory (PSI). Patients were followed at monthly intervals for six months after hospital discharge. Drinking status was assessed in terms of abstinence and relapse. Data were assessed with Student's t-test and univariate and multivariate analyses. In the logistic regression analysis, age, marital status, employment status, and PSI subscores were taken as the independent variables and drinking state at the end of six months as the dependent variable. There were significant differences in reflective, avoidant, and monitoring styles of problem solving between abstainers and relapsers. It was found that subjects who perceived their problem-solving style as less avoidant and less reflective were at greater risk of relapse. The findings demonstrate that active engagement in problem solving, such as utilizing avoidant and reflective styles, enhances abstinence. In treatment, expanding the behavior repertoire and increasing the variety of problem-solving strategies that can be utilized in daily life should be one of the major goals of the treatment program. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. The roles of emotional competence and social problem-solving in the relationship between physical abuse and adolescent suicidal ideation in China.

    PubMed

    Kwok, Sylvia Y C L; Yeung, Jerf W K; Low, Andrew Y T; Lo, Herman H M; Tam, Cherry H L

    2015-06-01

    The study investigated the relationship among physical abuse, positive psychological factors including emotional competence and social problem-solving, and suicidal ideation among adolescents in China. The possible moderating effects of emotional competence and social problem-solving in the association between physical abuse and adolescent suicidal ideation were also studied. A cross-sectional survey employing convenience sampling was conducted and self-administered questionnaires were collected from 527 adolescents with mean age of 14 years from the schools in Shanghai. Results showed that physical abuse was significantly and positively related to suicidal ideation in both male and female adolescents. Emotional competence was not found to be significantly associated with adolescent suicidal ideation, but rational problem-solving, a sub-scale of social problem-solving, was shown to be significantly and negatively associated with suicidal ideation for males, but not for females. However, emotional competence and rational problem-solving were shown to be a significant and a marginally significant moderator in the relationship between physical abuse and suicidal ideation in females respectively, but not in males. High rational problem-solving buffered the negative impact of physical abuse on suicidal ideation for females. Interestingly, females with higher empathy and who reported being physically abused by their parents have higher suicidal ideation. Findings are discussed and implications are stated. It is suggested to change the attitudes of parents on the concept of physical abuse, guide them on appropriate attitudes, knowledge and skills in parenting, and enhance adolescents' skills in rational problem-solving. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    PubMed

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
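
    The classical half of Simon's algorithm is easy to illustrate: each quantum query yields a random bitstring y with y·s = 0 (mod 2), and the hidden period s is recovered from a handful of such samples by linear algebra over GF(2). In the sketch below the quantum oracle is simulated classically, and the GF(2) solve is done by brute force for clarity.

```python
# Sketch of the classical post-processing in Simon's algorithm: the quantum
# part returns random bitstrings y with y.s = 0 (mod 2); enough independent
# samples pin down the hidden period s.  sample_y simulates the quantum
# oracle classically, and the GF(2) solve is brute force for clarity.
import itertools, random

def sample_y(s, n):
    while True:
        y = [random.randint(0, 1) for _ in range(n)]
        if sum(a * b for a, b in zip(y, s)) % 2 == 0:
            return y

def solve_simon(samples, n):
    # find a nonzero s with y.s = 0 (mod 2) for every collected y
    for bits in itertools.product([0, 1], repeat=n):
        if any(bits) and all(sum(a * b for a, b in zip(y, bits)) % 2 == 0
                             for y in samples):
            return list(bits)

n, s = 4, [1, 0, 1, 1]
samples = [sample_y(s, n) for _ in range(3 * n)]
print(solve_simon(samples, n))                 # recovers s with high probability
```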

  4. [Problem-solving strategies and marital satisfaction].

    PubMed

    Kriegelewicz, Olga

    2006-01-01

    This study investigated the relation between problem-solving strategies in marital conflict and marital satisfaction. Four problem-solving strategies (Dialogue, Loyalty, Escalation of conflict, and Withdrawal) were measured by the Problem-Solving Strategies Inventory, in two versions: self-report and report of the partner's perceived behaviour. This measure refers to the concept of Rusbult, Johnson, and Morrow, and meets high standards of reliability (Cronbach's alpha from 0.78 to 0.94) and validity. Marital satisfaction was measured by the Marriage Success Scale. The sample was composed of 147 married couples. The study revealed that satisfied couples, in comparison with non-satisfied couples, tend to use constructive problem-solving strategies (Dialogue and Loyalty). They rarely use destructive strategies like Escalation of conflict or Withdrawal. Dialogue is the strategy most positively connected with satisfaction. These might be very important guidelines for couples' psychotherapy. One's own Loyalty was also a significant positive predictor of male satisfaction. The study shows that constructive attitudes are the most significant predictors of marriage satisfaction. It is therefore worth concentrating mostly on them in the psychotherapeutic process instead of eliminating destructive attitudes.

  5. Cross-syndrome comparison of real-world executive functioning and problem solving using a new problem-solving questionnaire.

    PubMed

    Camp, Joanne S; Karmiloff-Smith, Annette; Thomas, Michael S C; Farran, Emily K

    2016-12-01

    Individuals with neurodevelopmental disorders like Williams syndrome and Down syndrome exhibit executive function impairments on experimental tasks (Lanfranchi, Jerman, Dal Pont, Alberti, & Vianello, 2010; Menghini, Addona, Costanzo, & Vicari, 2010), but the way that they use executive functioning for problem solving in everyday life has not hitherto been explored. The study aim is to understand cross-syndrome characteristics of everyday executive functioning and problem solving. Parents/carers of individuals with Williams syndrome (n=47) or Down syndrome (n=31) of a similar chronological age (m=17 years 4 months and 18 years respectively) as well as those of a group of younger typically developing children (n=34; m=8years 3 months) completed two questionnaires: the Behavior Rating Inventory of Executive Function (BRIEF; Gioia, Isquith, Guy, & Kenworthy, 2000) and a novel Problem-Solving Questionnaire. The rated likelihood of reaching a solution in a problem solving situation was lower for both syndromic groups than the typical group, and lower still for the Williams syndrome group than the Down syndrome group. The proportion of group members meeting the criterion for clinical significance on the BRIEF was also highest for the Williams syndrome group. While changing response, avoiding losing focus and maintaining perseverance were important for problem-solving success in all groups, asking for help and avoiding becoming emotional were also important for the Down syndrome and Williams syndrome groups respectively. Keeping possessions in order was a relative strength amongst BRIEF scales for the Down syndrome group. Results suggest that individuals with Down syndrome tend to use compensatory strategies for problem solving (asking for help and potentially, keeping items well ordered), while for individuals with Williams syndrome, emotional reactions disrupt their problem-solving skills. This paper highlights the importance of identifying syndrome-specific problem-solving strengths and difficulties to improve effective functioning in everyday life. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. A Mixed Integer Efficient Global Optimization Framework: Applied to the Simultaneous Aircraft Design, Airline Allocation and Revenue Management Problem

    NASA Astrophysics Data System (ADS)

    Roy, Satadru

    Traditional approaches to design and optimize a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use it along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations, and revenue management "subspaces". The research here develops an approach that can simultaneously solve these subspaces, posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that previous decomposition approaches may not have exploited, addresses mixed-integer/discrete design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. The approach is demonstrated on an 11-route airline network problem consisting of 94 decision variables, including 33 integer and 61 continuous variables. This application problem represents an interacting group of systems and poses key challenges to the optimization framework, reflected in the presence of a moderate number of integer and continuous design variables and an expensive analysis tool. The results indicate that simultaneously solving the subspaces can lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach, several test problems provided the ability to explore the performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems, including those with categorically discrete variables, indicating that the framework could have broader application than the aircraft design-fleet allocation-revenue management problem.
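
    The continuous backbone of EGO-style frameworks like this one is the expected-improvement acquisition evaluated on a Kriging (Gaussian process) surrogate. A generic sketch of that acquisition for minimization follows; the dissertation's mixed-integer handling and Kriging-partial-least-squares surrogate are not reproduced.

```python
# Generic expected-improvement acquisition used by EGO-style optimizers,
# sketched for minimization; the dissertation's mixed-integer treatment and
# Kriging-PLS surrogate are not reproduced.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # mu, sigma: surrogate mean/std at candidate points; f_best: incumbent
    sigma = np.maximum(sigma, 1e-12)           # guard against zero variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([1.2, 0.8, 1.0])
sigma = np.array([0.1, 0.3, 0.0])
print(expected_improvement(mu, sigma, f_best=1.0))
```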

  7. Risk of suicide ideation associated with problem-solving ability and attitudes toward suicidal behavior in university students.

    PubMed

    McAuliffe, Carmel; Corcoran, Paul; Keeley, Helen S; Perry, Ivan J

    2003-01-01

    The present paper investigates the risk of lifetime suicide ideation associated with problem-solving ability and attitudes toward suicidal behavior in a sample of 328 university students (41% male, 59% female). The response rate was 77% based on the total number of students registered for the relevant courses. A series of questions assessed lifetime suicide ideation, while problem solving and attitudes toward suicide were measured using the Self-Rating Problem Solving scale and four subscales of the Suicide Opinion Questionnaire, respectively (McLeavey, 1986; Domino et al., 1989). Almost one-third of the students surveyed had lifetime suicide ideation. Both genders were similar in terms of their suicide ideation history, problem solving, and attitudes toward suicidal behavior with the exception that male students were more in agreement with the attitude that suicidal behavior lacks real intent. Compared with 2% of nonideators and ideators, one in four planners reported that they would more than likely attempt suicide at some point in their life. Greater agreement with the attitude that suicidal behavior is normal was associated with significantly increased risk of being an ideator, as was poor problem solving and less agreement with the attitude that suicidal behavior is associated with mental illness.

  8. Problem-solving counseling as a therapeutic tool on youth suicidal behavior in the suburban population in Sri Lanka.

    PubMed

    Perera, E A Ramani; Kathriarachchi, Samudra T

    2011-01-01

    Suicidal behaviour among youth is a major public health concern in Sri Lanka. Prevention of youth suicides using effective, feasible, and culturally acceptable methods is invaluable in this regard; however, research in this area is grossly lacking. This study aimed at determining the effectiveness of problem-solving counselling as a therapeutic intervention in the prevention of youth suicidal behaviour in Sri Lanka. This controlled trial was based on hospital admissions with suicidal attempts in a suburban hospital in Sri Lanka; the study was carried out at Base Hospital Homagama. A sample of 124 was recruited using a convenience sampling method and divided into two groups, experimental and control. The control group was offered routine care, and the experimental group received four sessions of problem-solving counselling over one month. The outcome in both groups was measured six months after the initial screening, using a visual analogue scale. Individualized outcome measures showed that problem-solving ability among the subjects in the experimental group had improved after four counselling sessions and suicidal behaviour had been reduced; the results are statistically significant. This study confirms that problem-solving counselling is an effective therapeutic tool in the management of youth suicidal behaviour in a hospital setting in a developing country.

  9. Development of Collaborative Research Initiatives to Advance the Aerospace Sciences-via the Communications, Electronics, Information Systems Focus Group

    NASA Technical Reports Server (NTRS)

    Knasel, T. Michael

    1996-01-01

    The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms including evolutionary learning algorithms, neural networks, and adaptive clustering techniques to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report. The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because the individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. The dual use of this ATR technology clearly demonstrates the flexibility and power of our approach.

  10. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  11. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  12. An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems

    NASA Astrophysics Data System (ADS)

    Davey, A.

    1983-08-01

    A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10^6.

  13. How Is Health Related to Literacy, Numeracy, and Technological Problem-Solving Skills among U.S. Adults? Evidence from the Program for the International Assessment of Adult Competencies (PIAAC)

    ERIC Educational Resources Information Center

    Prins, Esther; Monnat, Shannon; Clymer, Carol; Toso, Blaire Wilson

    2015-01-01

    This paper uses data from the Program for the International Assessment of Adult Competencies (PIAAC) to analyze the relationship between U.S. adults' self-reported health and proficiencies in literacy, numeracy, and technological problem solving. Ordinal logistic regression analyses showed that scores on all three scales were positively and…

  14. Robotics and STEM Learning: Students' Achievements in Assignments According to the P3 Task Taxonomy--Practice, Problem Solving, and Projects

    ERIC Educational Resources Information Center

    Barak, Moshe; Assal, Muhammad

    2018-01-01

    This study presents the case of development and evaluation of a STEM-oriented 30-h robotics course for junior high school students (n = 32). Class activities were designed according to the P3 Task Taxonomy, which included: (1) practice: basic closed-ended tasks and exercises; (2) problem solving: small-scale open-ended assignments in which the…

  15. Solving large scale traveling salesman problems by chaotic neurodynamics.

    PubMed

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n^2 for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Last, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
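
    As a baseline for what the chaotic network emulates, here is a plain 2-opt tabu search in which recently applied moves are forbidden for a fixed tenure (the role the refractory effect plays in the neural version); the chaotic dynamics themselves are not reproduced.

```python
# Plain 2-opt tabu search for the TSP, the baseline the chaotic neural
# network emulates (refractory effects play the role of the tabu list).
import random

def tour_length(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_2opt(d, iters=500, tenure=15, sample=40, seed=0):
    rng = random.Random(seed)
    n = len(d)
    tour = list(range(n)); rng.shuffle(tour)
    best, tabu = tour[:], {}
    for t in range(iters):
        moves = [tuple(sorted(rng.sample(range(n), 2))) for _ in range(sample)]
        # best non-tabu 2-opt move in the sampled neighborhood (may be uphill)
        cands = [(tour[:i] + tour[i:j][::-1] + tour[j:], (i, j))
                 for i, j in moves if tabu.get((i, j), -1) <= t]
        if not cands:
            continue
        tour, move = min(cands, key=lambda c: tour_length(c[0], d))
        tabu[move] = t + tenure                # move is tabu for `tenure` steps
        if tour_length(tour, d) < tour_length(best, d):
            best = tour[:]
    return best

d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(tabu_2opt(d))
```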

  16. Cross-layer model design in wireless ad hoc networks for the Internet of Things.

    PubMed

    Yang, Xin; Wang, Ling; Xie, Jian; Zhang, Zhaolin

    2018-01-01

    Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low traffic periods, reducing congestion during high traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols without a negative influence on delay, particularly for large scale networks under conditions of highly contrasting high and low traffic periods.

  17. Cross-layer model design in wireless ad hoc networks for the Internet of Things

    PubMed Central

    Yang, Xin; Wang, Ling; Xie, Jian; Zhang, Zhaolin

    2018-01-01

    Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low traffic periods, reducing congestion during high traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols without a negative influence on delay, particularly for large scale networks under conditions of highly contrasting high and low traffic periods. PMID:29734355

  18. Understanding student use of mathematics in IPLS with the Math Epistemic Games Survey

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Mark; Hemingway, Deborah; Redish, Edward F.

    2017-01-01

    We present the Math Epistemic Games Survey (MEGS), a new concept inventory on the use of mathematics in introductory physics for the life sciences. The survey asks questions that are often best answered via techniques commonly valued in physics instruction, including dimensional analysis, checking special or extreme cases, understanding scaling relationships, interpreting graphical representations, estimation, and mapping symbols onto physical meaning. MEGS questions are often rooted in quantitative biology. We present preliminary data on the validation and administration of the MEGS in a large introductory physics for the life sciences course at the University of Maryland, as well as preliminary results on the clustering of questions and responses as a guide to student resource activation in problem solving. This material is based upon work supported by the US National Science Foundation under Award No. 15-04366.

  19. Optimization of Cubic Polynomial Functions without Calculus

    ERIC Educational Resources Information Center

    Taylor, Ronald D., Jr.; Hansen, Ryan

    2008-01-01

    In algebra and precalculus courses, students are often asked to find extreme values of polynomial functions in the context of solving an applied problem; but without the notion of derivative, something is lost. Either the functions are reduced to quadratics, since students know the formula for the vertex of a parabola, or solutions are…
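
    One calculus-free route: at a local extremum of a cubic, the horizontal line through the extremum is tangent to the graph, so the cubic minus the extreme value has a repeated root. A worked example for f(x) = x^3 - 3x (our illustration, not necessarily the article's):

```latex
% Extremes of f(x) = x^3 - 3x without derivatives: force a double root.
% If y = k is tangent at x = r, then x^3 - 3x - k = (x - r)^2 (x + 2r),
% since the three roots must sum to zero (there is no x^2 term).
\[
(x - r)^2 (x + 2r) = x^3 - 3r^2 x + 2r^3
\quad\Longrightarrow\quad
3r^2 = 3, \qquad k = -2r^3 .
\]
```

    Hence r = ±1, giving a local maximum f(-1) = 2 and a local minimum f(1) = -2, with no derivative in sight.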

  20. The Case of Thyroid Hormones: How to Learn Physiology by Solving a Detective Case

    ERIC Educational Resources Information Center

    Lellis-Santos, Camilo; Giannocco, Gisele; Nunes, Maria Tereza

    2011-01-01

    Thyroid diseases are prevalent among endocrine disorders, and careful evaluation of patients' symptoms is a very important part in their diagnosis. Developing new pedagogical strategies, such as problem-based learning (PBL), is extremely important to stimulate and encourage medical and biomedical students to learn thyroid physiology and identify…

  1. Human Resources: Solving Work and Life Challenges

    ERIC Educational Resources Information Center

    Bartley, Sharon Jeffcoat

    2003-01-01

    Work-life issues are those problems employees have that impact their ability to perform their work and may lead to increasing levels of stress. Stress over time can lead to low employee morale, lower productivity, decreased job satisfaction and eventually to sickness and absenteeism. In extreme cases, stress can result in substance abuse or…

  2. Cognitive Profiles of Mathematical Problem Solving Learning Disability for Different Definitions of Disability

    ERIC Educational Resources Information Center

    Tolar, Tammy D.; Fuchs, Lynn; Fletcher, Jack M.; Fuchs, Douglas; Hamlett, Carol L.

    2016-01-01

    Three cohorts of third-grade students (N = 813) were evaluated on achievement, cognitive abilities, and behavioral attention according to contrasting research traditions in defining math learning disability (LD) status: low achievement versus extremely low achievement and IQ-achievement discrepant versus strictly low-achieving LD. We use methods…

  3. Mental Images and the Modification of Learning Defects.

    ERIC Educational Resources Information Center

    Patten, Bernard M.

    Because human memory and thought involve extremely complex processes, it is possible to employ unusual modalities and specific visual strategies for remembering and problem-solving to assist patients with memory defects. This three-part paper discusses some of the research in the field of human memory and describes practical applications of these…

  4. Simulations of relativistic quantum plasmas using real-time lattice scalar QED

    NASA Astrophysics Data System (ADS)

    Shi, Yuan; Xiao, Jianyuan; Qin, Hong; Fisch, Nathaniel J.

    2018-05-01

    Real-time lattice quantum electrodynamics (QED) provides a unique tool for simulating plasmas in the strong-field regime, where collective plasma scales are not well separated from relativistic-quantum scales. As a toy model, we study scalar QED, which describes self-consistent interactions between charged bosons and electromagnetic fields. To solve this model on a computer, we first discretize the scalar-QED action on a lattice, in a way that respects geometric structures of exterior calculus and U(1)-gauge symmetry. The lattice scalar QED can then be solved, in the classical-statistics regime, by advancing an ensemble of statistically equivalent initial conditions in time, using classical field equations obtained by extremizing the discrete action. To demonstrate the capability of our numerical scheme, we apply it to two example problems. The first example is the propagation of linear waves, where we recover analytic wave dispersion relations using numerical spectrum. The second example is an intense laser interacting with a one-dimensional plasma slab, where we demonstrate natural transition from wakefield acceleration to pair production when the wave amplitude exceeds the Schwinger threshold. Our real-time lattice scheme is fully explicit and respects local conservation laws, making it reliable for long-time dynamics. The algorithm is readily parallelized using domain decomposition, and the ensemble may be computed using quantum parallelism in the future.
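
    As a much-reduced illustration of the real-time lattice flavor of such schemes, the sketch below leapfrog-integrates a free 1D scalar (Klein-Gordon) field from a discretized action; the gauge coupling, U(1) structure, and ensemble averaging of the paper are all omitted.

```python
# Much-reduced illustration of a real-time lattice field update: a free 1D
# scalar field of mass m, leapfrog-integrated on a periodic grid.  Gauge
# coupling, U(1) structure, and ensemble averaging are all omitted.
import numpy as np

N, dx, dt, m = 256, 0.1, 0.05, 1.0
x = np.arange(N) * dx
phi = np.exp(-((x - N * dx / 2) ** 2))        # Gaussian initial condition
pi = np.zeros(N)                              # conjugate momentum

def laplacian(f):
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2   # periodic

for _ in range(1000):
    pi += dt * (laplacian(phi) - m**2 * phi)  # Klein-Gordon field equation
    phi += dt * pi                            # leapfrog position update

energy = 0.5 * np.sum(pi**2 + m**2 * phi**2) * dx \
       + 0.5 * np.sum(((np.roll(phi, -1) - phi) / dx) ** 2) * dx
```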

  5. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straight-forward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand-sides.

  6. The functional implications of motor, cognitive, psychiatric, and social problem-solving states in Huntington's disease.

    PubMed

    Van Liew, Charles; Gluhm, Shea; Goldstein, Jody; Cronan, Terry A; Corey-Bloom, Jody

    2013-01-01

    Huntington's disease (HD) is a genetic, neurodegenerative disorder characterized by motor, cognitive, and psychiatric dysfunction. In HD, the inability to solve problems successfully affects not only disease coping, but also interpersonal relationships, judgment, and independent living. The aim of the present study was to examine social problem-solving (SPS) in well-characterized HD and at-risk (AR) individuals and to examine its unique and conjoint effects with motor, cognitive, and psychiatric states on functional ratings. Sixty-three participants, 31 HD and 32 gene-positive AR, were included in the study. Participants completed the Social Problem-Solving Inventory-Revised: Long (SPSI-R:L), a 52-item, reliable, standardized measure of SPS. Items are aggregated under five scales (Positive, Negative, and Rational Problem-Solving; Impulsivity/Carelessness and Avoidance Styles). Participants also completed the Unified Huntington's Disease Rating Scale functional, behavioral, and cognitive assessments, as well as additional neuropsychological examinations and the Symptom Checklist-90-Revised (SCL-90R). A structural equation model was used to examine the effects of motor, cognitive, psychiatric, and SPS states on functionality. The multifactor structural model fit well descriptively. Cognitive and motor states uniquely and significantly predicted function in HD; however, neither psychiatric nor SPS states did. SPS was, however, significantly related to motor, cognitive, and psychiatric states, suggesting that it may bridge the correlative gap between psychiatric and cognitive states in HD. SPS may be worth assessing in conjunction with the standard gamut of clinical assessments in HD. Suggestions for future research and implications for patients, families, caregivers, and clinicians are discussed.

  7. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
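
    The sequential kernel being parallelized is ordinary simulated annealing over variable flips, with Metropolis acceptance on the number of unsatisfied clauses. A minimal sketch of that kernel (the paper's speculative parallel scheduling is not shown):

```python
# Sequential kernel of simulated annealing for SAT: flip one variable, accept
# with the Metropolis rule on the count of unsatisfied clauses.  The paper's
# speculative parallel scheduling is not shown.
import math, random

def unsat_count(clauses, assign):
    # a clause (tuple of signed ints) is satisfied if any literal is true
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def anneal_sat(clauses, n_vars, T=2.0, cooling=0.999, steps=20000, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    cost = unsat_count(clauses, assign)
    for _ in range(steps):
        v = rng.randint(1, n_vars)
        assign[v] = not assign[v]              # propose a single-variable flip
        new = unsat_count(clauses, assign)
        if new <= cost or rng.random() < math.exp((cost - new) / T):
            cost = new                         # accept
        else:
            assign[v] = not assign[v]          # reject: undo the flip
        T *= cooling
    return assign, cost

clauses = [(1, -2, 3), (-1, 2), (2, 3), (-3, -2, 1)]   # literals as signed ints
print(anneal_sat(clauses, n_vars=3)[1])                # 0 means all satisfied
```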

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo -Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  9. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is quite an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is thus quite important for USCM. It can be shown by straightforward computation that the dual form of USCM is equivalent to a minimum enclosing ball (MEB) problem; therefore, the core vector machine (CVM) is introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra, and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k-nearest neighbor) and SVM (support vector machine) in rare spectra mining, on the small- and medium-scale datasets and the large-scale datasets, respectively.
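
    The MEB connection is what makes the CVM reduction fast: a (1+ε)-approximate minimum enclosing ball can be computed by the classic Badoiu-Clarkson scheme, which repeatedly pulls the center toward the farthest point. A minimal sketch of that core step, not of USCM itself:

```python
# The minimum-enclosing-ball core that CVM exploits: the Badoiu-Clarkson
# (1+eps)-approximation repeatedly pulls the center toward the farthest
# point, with step sizes 1/2, 1/3, ...
import numpy as np

def minimum_enclosing_ball(points, eps=0.01):
    c = points.mean(axis=0)
    for k in range(int(1 / eps**2)):           # iteration bound from the analysis
        dists = np.linalg.norm(points - c, axis=1)
        far = int(np.argmax(dists))
        c = c + (points[far] - c) / (k + 2)    # step toward the farthest point
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

pts = np.random.default_rng(0).standard_normal((1000, 5))
center, radius = minimum_enclosing_ball(pts)
```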

  10. REMO poor man's reanalysis

    NASA Astrophysics Data System (ADS)

    Ries, H.; Moseley, C.; Haensler, A.

    2012-04-01

    Reanalyses depict the state of the atmosphere as a best fit in space and time of many atmospheric observations in a physically consistent way. By essentially solving the data assimilation problem in a very accurate manner, reanalysis results can be used as reference for model evaluation procedures and as forcing data sets for different model applications. However, the spatial resolution of the most common and accepted reanalysis data sets (e.g. JRA25, ERA-Interim) ranges from approximately 124 km to 80 km. This resolution is too coarse to simulate certain small-scale processes often associated with extreme events. In addition, many models need higher-resolved forcing data (e.g. land-surface models, tools for identifying and assessing hydrological extremes). Therefore we downscaled the ERA-Interim reanalysis over the EURO-CORDEX domain for the time period 1989 to 2008 to a horizontal resolution of approximately 12 km. The downscaling is performed by nudging REMO simulations to lower and lateral boundary conditions of the reanalysis, and by re-initializing the model every 24 hours ("REMO in forecast mode"). In this study the three following questions will be addressed: 1.) Does the REMO poor man's reanalysis meet the needs (accuracy, extreme value distribution) in validation and forcing? 2.) What lessons can be learned about the model used for downscaling? As REMO is used as a pure downscaling procedure, any systematic deviations from ERA-Interim result from poor process modelling, not from predictability limitations. 3.) How much small-scale information generated by the downscaling model is lost with frequent initializations? A comparison to a simulation that is performed in climate mode will be presented.

  11. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
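
    Absent design constraints, the stated criterion (maximize the sum of squared sensitivities) reduces to picking the candidate wells with the largest sensitivity-row norms, as in the toy sketch below; it is the design constraints that make the search combinatorial and motivate the GA over a POD-reduced model.

```python
# Toy version of the maximal-information criterion: with no design
# constraints, maximizing the sum of squared sensitivities just selects the
# candidate wells with the largest sensitivity-row norms.  The constraints
# in the paper are what force a combinatorial (GA) search.
import numpy as np

def top_k_design(S, n_wells):
    # S[i, j] = sensitivity of candidate location i to unknown pumping j
    scores = np.sum(S**2, axis=1)
    return np.argsort(scores)[::-1][:n_wells]

S = np.random.default_rng(2).standard_normal((50, 8))   # 50 candidates, 8 unknowns
print(top_k_design(S, n_wells=5))
```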

  12. Kinematical simulation of robotic complex operation for implementing full-scale additive technologies of high-end materials, composites, structures, and buildings

    NASA Astrophysics Data System (ADS)

    Antsiferov, S. I.; Eltsov, M. Iu; Khakhalev, P. A.

    2018-03-01

    This paper considers a newly designed electronic digital model of a robotic complex for implementing full-scale additive technologies, funded under a Federal Target Program. The electronic digital model was used to solve the problem of simulating the movement of the robotic complex using the NX CAD/CAM/CAE system. As part of solving the problem, the virtual mechanism was built and the main assemblies, joints, and drives were identified. In addition, the maximum allowed printable area was identified for the robotic complex, and a simulation of printing a rectangular-shaped article was carried out.

  13. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
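
    The damping-parameter recycling idea can be sketched compactly. The code below builds a Lanczos basis for the Krylov subspace of J^T J once and reuses the small projected system for every Levenberg-Marquardt damping value; the Jacobian and residual are synthetic, and this is a schematic stand-in for the approach, not the authors' Julia code in MADS.

```python
import numpy as np

def lanczos(A_mul, g, k):
    """Build an orthonormal basis V of K_k(A, g) and the tridiagonal
    projection T = V^T A V, with full reorthogonalization."""
    n = g.size
    V = np.zeros((n, k))
    T = np.zeros((k, k))
    V[:, 0] = g / np.linalg.norm(g)
    for j in range(k):
        w = A_mul(V[:, j])
        for i in range(max(0, j - 1), j + 1):
            T[i, j] = T[j, i] = V[:, i] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # reorthogonalize
        if j + 1 < k:
            beta = np.linalg.norm(w)
            T[j + 1, j] = T[j, j + 1] = beta
            V[:, j + 1] = w / beta
    return V, T

rng = np.random.default_rng(1)
J = rng.normal(size=(500, 200))          # synthetic Jacobian
r = rng.normal(size=500)                 # synthetic residual

g = J.T @ r                              # gradient J^T r
A_mul = lambda v: J.T @ (J @ v)          # apply J^T J without forming it
k = 30
V, T = lanczos(A_mul, g, k)              # built ONCE ...

e1 = np.zeros(k)
e1[0] = np.linalg.norm(g)
for lam in [1e2, 1e0, 1e-2]:             # ... recycled for every damping value
    y = np.linalg.solve(T + lam * np.eye(k), e1)
    step = V @ y                         # approximate LM step
    exact = np.linalg.solve(J.T @ J + lam * np.eye(200), g)
    err = np.linalg.norm(step - exact) / np.linalg.norm(exact)
    print(f"lambda={lam:g}  relative step error={err:.2e}")
```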

  14. Telehealth Problem-Solving Therapy for Depressed Low-Income Homebound Older Adults: Acceptance and Preliminary Efficacy

    PubMed Central

    Choi, Namkee G.; Hegel, Mark T.; Marti, C. Nathan; Marinucci, Mary Lynn; Sirrianni, Leslie; Bruce, Martha L.

    2012-01-01

    Objective To evaluate the acceptance and preliminary efficacy of in-home telehealth delivery of problem-solving therapy (tele-PST) among depressed low-income homebound older adults in a pilot randomized controlled trial (RCT). Methods 121 homebound individuals who were aged 50+ and scored 15+ on the 24-item Hamilton Rating Scale for Depression (HAMD) participated in the 3-arm RCT comparing tele-PST to in-person PST and telephone support calls. Six sessions of PST-PC (primary care) were conducted for the PST participants; for tele-PST, the second through sixth sessions were conducted via Skype video call. Acceptance of tele-PST or in-person PST was measured with the 11-item, 7-point modified Treatment Evaluation Inventory (TEI). Mixed-effects regression analysis was used to examine the effects of treatment group, time, and the interaction between treatment group and time on the HAMD scores. Results The TEI score was slightly higher among tele-PST participants than among in-person PST participants. The HAMD scores of tele-PST and in-person PST participants at 12-week follow-up were significantly lower than those of telephone support call participants, and the treatment effects were maintained at 24-week follow-up. The HAMD scores of tele-PST participants did not differ from those of in-person PST participants. Conclusions Despite their initial skepticism, almost all participants had extremely positive attitudes toward tele-PST at 12-week follow-up. Tele-PST appears to be an efficacious treatment modality for depressed homebound older adults, with significant potential to facilitate their access to treatment. PMID:23567376

  15. A multilevel correction adaptive finite element method for Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method with a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, while the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving a large-scale Kohn-Sham system is avoided effectively, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration is obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
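
    To make the correction idea concrete, here is a schematic 1D finite-difference analogue for a linear model eigenproblem. It is not the paper's adaptive finite element implementation: the potential, grid sizes, and the Rayleigh-quotient update are illustrative stand-ins for the multilevel correction step.

```python
import numpy as np

def hamiltonian(n):
    """Finite-difference -u'' + V(x) u on (0,1) with zero Dirichlet BCs."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    V = 100 * x**2                       # illustrative fixed potential
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2 + np.diag(V)
    return A, x

# 1) Eigenvalue problem solved only on a COARSE grid (cheap).
nc, nf = 40, 640
Ac, xc = hamiltonian(nc)
lam_c, Uc = np.linalg.eigh(Ac)
lam0, u0 = lam_c[0], Uc[:, 0]

# 2) Interpolate the coarse eigenvector to the FINE grid.
Af, xf = hamiltonian(nf)
u0f = np.interp(xf, xc, u0)

# 3) Correction: solve only a linear BOUNDARY VALUE problem on the
#    fine grid (Af u = lam_c * u_coarse), then update the eigenvalue
#    with a Rayleigh quotient -- no fine-grid eigensolve is needed.
u = np.linalg.solve(Af, lam0 * u0f)
lam_corr = (u @ Af @ u) / (u @ u)

lam_ref = np.linalg.eigvalsh(Af)[0]      # reference fine-grid eigenvalue
print(f"coarse eigenvalue error:    {abs(lam0 - lam_ref):.3e}")
print(f"corrected eigenvalue error: {abs(lam_corr - lam_ref):.3e}")
```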

  16. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from NOAA satellite data is urgent not only for meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used AVHRR/NOAA video data that regularly display the situation over the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For a class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and information content of the synthesized decision rules. In this case, to solve the problem of identifying cloud field types efficiently, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of the distributions from the learning samples arises. To this end, we used nonparametric estimates of the distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was substituted by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, the cloudiness type was identified using the Bayes decision rule.
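
    The density-estimation and decision steps can be illustrated with a small sketch: nonparametric class-conditional densities built from a product Epanechnikov kernel, combined through the Bayes decision rule. The data are synthetic stand-ins for multispectral pixels, and the fixed bandwidth replaces the risk-functional minimization described in the abstract.

```python
import numpy as np

def epanechnikov_kde(X_train, X_query, h):
    """Product Epanechnikov kernel density estimate at the query points."""
    # u has shape (n_query, n_train, n_features)
    u = (X_query[:, None, :] - X_train[None, :, :]) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    return k.prod(axis=2).mean(axis=1) / h**X_train.shape[1]

rng = np.random.default_rng(2)
# Synthetic 2-channel "pixels" for two cloud classes (stand-in data).
A = rng.normal([0.0, 0.0], 0.7, size=(300, 2))
B = rng.normal([2.0, 1.5], 0.9, size=(300, 2))

def classify(X, h=0.5, priors=(0.5, 0.5)):
    # Bayes decision rule: pick the class with the largest
    # prior-weighted nonparametric density estimate.
    pA = priors[0] * epanechnikov_kde(A, X, h)
    pB = priors[1] * epanechnikov_kde(B, X, h)
    return np.where(pA >= pB, "A", "B")

print(classify(np.array([[0.1, -0.2], [2.2, 1.4]])))
```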

  17. The association of minor and major depression with health problem-solving and diabetes self-care activities in a clinic-based population of adults with type 2 diabetes mellitus.

    PubMed

    Shin, Na; Hill-Briggs, Felicia; Langan, Susan; Payne, Jennifer L; Lyketsos, Constantine; Golden, Sherita Hill

    2017-05-01

    We examined whether problem-solving and diabetes self-management behaviors differ by depression diagnosis - major depressive disorder (MDD) and minor depressive disorder (MinDD) - in adults with type 2 diabetes (T2DM). We screened a clinical sample of 702 adults with T2DM for depression, identified 52 individuals who screened positive and a sample of 51 who screened negative, and performed a structured diagnostic psychiatric interview. MDD (n=24), MinDD (n=17), and no depression (n=62) were diagnosed using Diagnostic and Statistical Manual of Mental Disorders IV Text Revision (DSM-IV-TR) criteria. Health Problem-Solving Scale (HPSS) and Summary of Diabetes Self-Care Activities (SDSCA) questionnaires determined problem-solving and T2DM self-management skills, respectively. We compared HPSS and SDSCA scores by depression diagnosis, adjusting for age, sex, race, and diabetes duration, using linear regression. Total HPSS scores for MDD (β=-4.38; p<0.001) and MinDD (β=-2.77; p<0.01) were lower than for no depression. The total SDSCA score for MDD (β=-10.1; p<0.01) was lower than for no depression and was partially explained by the total HPSS. Individuals with T2DM and MinDD or MDD have impaired problem-solving ability. MDD individuals had impaired diabetes self-management, partially explained by impaired problem-solving. Future studies should assess problem-solving therapy to treat T2DM and MinDD, and integrated problem-solving with diabetes self-management for those with T2DM and MDD. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Lesion mapping of social problem solving.

    PubMed

    Barbey, Aron K; Colom, Roberto; Paul, Erick J; Chau, Aileen; Solomon, Jeffrey; Grafman, Jordan H

    2014-10-01

    Accumulating neuroscience evidence indicates that human intelligence is supported by a distributed network of frontal and parietal regions that enable complex, goal-directed behaviour. However, the contributions of this network to social aspects of intellectual function remain to be well characterized. Here, we report a human lesion study (n = 144) that investigates the neural bases of social problem solving (measured by the Everyday Problem Solving Inventory) and examine the degree to which individual differences in performance are predicted by a broad spectrum of psychological variables, including psychometric intelligence (measured by the Wechsler Adult Intelligence Scale), emotional intelligence (measured by the Mayer, Salovey, Caruso Emotional Intelligence Test), and personality traits (measured by the Neuroticism-Extraversion-Openness Personality Inventory). Scores for each variable were obtained, followed by voxel-based lesion-symptom mapping. Stepwise regression analyses revealed that working memory, processing speed, and emotional intelligence predict individual differences in everyday problem solving. A targeted analysis of specific everyday problem solving domains (involving friends, home management, consumerism, work, information management, and family) revealed psychological variables that selectively contribute to each. Lesion mapping results indicated that social problem solving, psychometric intelligence, and emotional intelligence are supported by a shared network of frontal, temporal, and parietal regions, including white matter association tracts that bind these areas into a coordinated system. The results support an integrative framework for understanding social intelligence and make specific recommendations for the application of the Everyday Problem Solving Inventory to the study of social problem solving in health and disease. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Is Trait Rumination Associated with the Ability to Generate Effective Problem Solving Strategies? Utilizing Two Versions of the Means-Ends Problem-Solving Test.

    PubMed

    Hasegawa, Akira; Nishimura, Haruki; Matsuda, Yuko; Kunisato, Yoshihiko; Morimoto, Hiroshi; Adachi, Masaki

    This study examined the relationship between trait rumination and the effectiveness of problem solving strategies as assessed by the Means-Ends Problem-Solving Test (MEPS) in a nonclinical population. The present study extended previous studies by using two instructions in the MEPS: the second-person, actual-strategy instructions, which have been utilized in previous studies on rumination, and the third-person, ideal-strategy instructions, which are considered more suitable for assessing the effectiveness of problem solving strategies. We also replicated the association between rumination and each dimension of the Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S). Japanese undergraduate students (N = 223) completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale (RRS), MEPS, and SPSI-R:S. One half of the sample completed the MEPS with the second-person, actual-strategy instructions; the other participants completed it with the third-person, ideal-strategy instructions. The results showed that neither the total RRS score nor its subscale scores were significantly correlated with MEPS scores under either of the two instructions. These findings, taken together with previous findings, indicate that in nonclinical populations trait rumination is not related to the effectiveness of problem solving strategies, but that state rumination while responding to the MEPS deteriorates the quality of strategies. The correlations between RRS and SPSI-R:S scores indicated that trait rumination in general, and its brooding subcomponent in particular, are parts of cognitive and behavioral responses that attempt to avoid negative environmental and negative private events. Results also showed that reflection is a part of active problem solving.

  20. An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures

    NASA Technical Reports Server (NTRS)

    Cohl, H.

    1994-01-01

    We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.
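
    A 2D Cartesian analogue of the core splitting idea is sketched below, assuming periodic boundaries in x and homogeneous Dirichlet in y: an FFT along one direction decouples the Poisson problem into independent tridiagonal systems, which are exactly the solves the paper distributes across SIMD processors. The paper's solver works on a 3D cylindrical grid, so this is only illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

def poisson_fft_tridiag(f, Lx=2*np.pi, Ly=1.0):
    """Solve u_xx + u_yy = f, periodic in x, u = 0 at y = 0 and y = Ly.
    An FFT in x decouples the problem into one tridiagonal system in y
    per Fourier mode, mirroring the operator-splitting idea."""
    nx, ny = f.shape
    dy = Ly / (ny + 1)
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)      # wavenumbers in x
    fh = np.fft.fft(f, axis=0)                    # transform along x
    uh = np.zeros_like(fh)
    for i, k in enumerate(kx):
        # (d^2/dy^2 - k^2) u_hat = f_hat  ->  tridiagonal in y.
        # These independent solves are what a SIMD machine does in parallel.
        ab = np.zeros((3, ny), dtype=complex)
        ab[0, 1:] = 1.0 / dy**2                   # superdiagonal
        ab[1, :] = -2.0 / dy**2 - k**2            # main diagonal
        ab[2, :-1] = 1.0 / dy**2                  # subdiagonal
        uh[i] = solve_banded((1, 1), ab, fh[i])
    return np.real(np.fft.ifft(uh, axis=0))

# Manufactured solution check: u = sin(x) * sin(pi * y / Ly)
nx, ny, Ly = 64, 127, 1.0
x = np.linspace(0, 2*np.pi, nx, endpoint=False)
y = np.linspace(Ly/(ny+1), Ly - Ly/(ny+1), ny)
X, Y = np.meshgrid(x, y, indexing="ij")
u_exact = np.sin(X) * np.sin(np.pi * Y / Ly)
f = -(1 + (np.pi/Ly)**2) * u_exact
print(np.max(np.abs(poisson_fft_tridiag(f, Ly=Ly) - u_exact)))  # small
```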

  1. Can microbes economically remove sulfur

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, J.L.

    Researchers have reported that refiners who now rely on costly physicochemical procedures to desulfurize petroleum will soon have an alternative microbial-enzyme-based approach to this process. This new approach is still under development, and a considerable number of chemical engineering problems need to be solved before it is ready for large-scale use. This paper reviews the several research projects dedicated to solving the problems that keep a biotechnology-based alternative from competing with chemical desulfurization.

  2. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
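
    The flavor of an external penalty iteration can be shown in a few lines. The sketch below, using SciPy's unconstrained minimizer on a toy two-variable problem, is a minimal illustration in the spirit of the method described, not the BIGDOT implementation; the toy problem and parameters are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) subject to g(x) <= 0.
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
g = lambda x: np.array([x[0] + x[1] - 2,         # x0 + x1 <= 2
                        x[0]**2 - x[1]])         # x0^2 <= x1

def exterior_penalty(x0, r0=1.0, growth=10.0, outer=6):
    """Solve a sequence of unconstrained problems
    min f(x) + r * sum(max(0, g(x))^2) with increasing penalty r.
    Only violated constraints contribute, which keeps the memory
    footprint minimal -- the property that lets such methods scale."""
    x, r = np.asarray(x0, float), r0
    for _ in range(outer):
        phi = lambda x: f(x) + r * np.sum(np.maximum(0.0, g(x))**2)
        x = minimize(phi, x, method="BFGS").x
        r *= growth
    return x

print(exterior_penalty([0.0, 0.0]))   # approaches the constrained optimum (1, 1)
```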

  3. Can compactifications solve the cosmological constant problem?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali

    2016-06-30

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain a Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  4. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    NASA Astrophysics Data System (ADS)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here is not only applicable to this problem but can also be used to solve general unstructured finite element systems on a wide range of parallel architectures.
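
    As a rough illustration of the solver structure, the sketch below runs conjugate gradients with a one-level, non-overlapping additive Schwarz (block Jacobi) preconditioner on a model SPD system. It is an assumption-laden miniature, not DOUG: it omits the overlap and algebraic subdomain construction a production preconditioner would use.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, splu

# Model SPD system: 2D Laplacian, standing in for the reduced
# divergence-free velocity problem described in the abstract.
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
N = A.shape[0]

# One-level additive Schwarz without overlap (= block Jacobi):
# factor each "subdomain" block once, reuse it every CG iteration.
nblocks = 8
blocks = np.array_split(np.arange(N), nblocks)
lus = [splu(A[idx, :][:, idx].tocsc()) for idx in blocks]

def apply_preconditioner(r):
    z = np.zeros_like(r)
    for idx, lu in zip(blocks, lus):
        z[idx] = lu.solve(r[idx])        # independent subdomain solves
    return z

M = LinearOperator((N, N), matvec=apply_preconditioner)
b = np.ones(N)
iters = [0]
x, info = cg(A, b, M=M, callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
print("converged:", info == 0, " CG iterations:", iters[0])
```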

  5. The inverse problem of sensing the mass and force induced by an adsorbate on a beam nanomechanical resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yun; Zhang, Yin

    2016-06-08

    The mass sensing superiority of a micro/nanomechanical resonator sensor over conventional mass spectrometry has been, or at least is being, firmly established. Because the sensing mechanism of a mechanical resonator sensor is the shift of resonant frequencies, linking the shifts of resonant frequencies with the material properties of an analyte formulates an inverse problem. Besides the analyte/adsorbate mass, many other factors such as position and axial force can also cause shifts of the resonant frequencies. The in-situ measurement of the adsorbate position and axial force is extremely difficult, if not impossible, especially when an adsorbate is as small as a molecule or an atom, and it requires extra instruments. In this study, an inverse problem of using three resonant frequencies to determine the mass, position, and axial force is formulated and solved. The accuracy of the inverse problem solving method is demonstrated, and how the method can be used in a real application of a nanomechanical resonator is also discussed. Solving the inverse problem benefits the development and application of mechanical resonator sensors in two ways: it reduces the need for extra experimental equipment, and it achieves better mass sensing by taking more factors into account.
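
    The three-equations/three-unknowns structure can be demonstrated with a deliberately toy forward model. In the sketch below, the frequency formula, the simply supported mode shapes, and every coefficient are assumptions made for illustration only (they are not the paper's beam model); the point is inverting three measured frequencies for mass, position, and axial force with a root finder.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy forward model (an assumption, not the paper's): first three
# resonant frequencies of a beam carrying a point mass m at position a
# under axial force F, with sinusoidal mode shapes phi_n.
L, m_beam = 1.0, 1.0
f0 = np.array([1.0, 4.0, 9.0])            # bare frequencies (normalized)
alpha = np.array([0.10, 0.025, 0.011])    # illustrative force coefficients

def freqs(m, a, F):
    phi = np.sin(np.arange(1, 4) * np.pi * a / L)        # mode shapes at a
    return f0 * np.sqrt((1 + alpha * F) / (1 + 2 * m / m_beam * phi**2))

# "Measured" frequencies generated from a known ground truth.
truth = (0.05, 0.3, 0.8)                  # (mass, position, axial force)
f_meas = freqs(*truth)

# Inverse problem: three equations (frequency shifts), three unknowns.
# Note: phi^2 is symmetric about midspan, so a and L - a are equivalent;
# the initial guess selects which of the two mirror solutions is found.
residual = lambda p: freqs(*p) - f_meas
sol = fsolve(residual, x0=(0.01, 0.4, 0.0))
print("recovered (m, a, F):", sol)
```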

  6. Dual Task of Fine Motor Skill and Problem Solving in Individuals With Multiple Sclerosis: A Pilot Study.

    PubMed

    Goverover, Y; Sandroff, B M; DeLuca, J

    2018-04-01

    To (1) examine and compare dual-task performance in patients with multiple sclerosis (MS) and healthy controls (HCs) using mathematical problem-solving questions that included an everyday competence component while performing an upper extremity fine motor task; and (2) examine whether difficulties in dual-task performance are associated with problems in performing an everyday internet task. Pilot study with a mixed design including both within- and between-subjects factors. A nonprofit rehabilitation research institution and the community. Participants (N=38) included persons with MS (n=19) and HCs (n=19) who were recruited from a nonprofit rehabilitation research institution and from the community. Not applicable. Participants were presented with two testing conditions: (1) solving mathematical everyday problems or placing bolts into divots (single-task condition); and (2) solving problems while putting bolts into divots (dual-task condition). Additionally, participants were required to perform a test of everyday internet competence. As expected, dual-task performance was significantly worse than performance on either single task (i.e., number of bolts placed into divots or correct answers, and time to answer the questions). Cognitive, but not motor, dual-task cost was associated with worse performance on everyday internet tasks. Cognitive dual-task cost is significantly associated with worse performance of everyday technology; this was not observed for the motor dual-task cost. The implications of dual-task costs for everyday activity are discussed. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. A Green's function method for local and non-local parallel transport in general magnetic fields

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, Diego; Chacón, Luis

    2009-11-01

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity χ∥ and the perpendicular conductivity χ⊥ (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates; and (iii) nonlocal parallel transport in the limit of small collisionality. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation, applicable to integrable and chaotic magnetic fields. The numerical implementation employs a volume-preserving field-line integrator [Finn and Chacón, Phys. Plasmas, 12 (2005)] for an accurate representation of the magnetic field lines regardless of the level of stochasticity. The general formalism and its algorithmic properties are discussed along with illustrative analytical and numerical examples. Problems of particular interest include: the departures from the Rechester-Rosenbluth diffusive scaling in the weak magnetic chaos regime, the interplay between non-locality and chaos, and the robustness of transport barriers in reverse shear configurations.

  8. Suppliers solve processing problems

    USDA-ARS?s Scientific Manuscript database

    The year's IFT food expo showcased numerous companies and organizations offering solutions to food processing needs and challenges. From small-scale unit operations to commercial-scale equipment lines, exhibitors highlighted both traditional and novel food processing operations for food product dev...

  9. Solving Campus Parking Shortages: New Solutions for an Old Problem

    ERIC Educational Resources Information Center

    Millard-Ball, Adam; Siegman, Patrick; Tumlin, Jeffrey

    2004-01-01

    Universities and colleges across the country are faced with growth in the campus population and the loss of surface parking lots for new buildings. The response of many institutions is to build new garages with the assumption that parking demand ratios will remain the same. Such an approach, however, can be extremely expensive--upwards of …

  10. Exploring the Extreme: High Performance Learning Activities in Mathematics, Science and Technology.

    ERIC Educational Resources Information Center

    2003

    This educator guide for grades K-4 and 5-8 presents the basic science of aeronautics by emphasizing hands-on involvement, prediction, data collection and interpretation, teamwork, and problem solving. Activities include: (1) Finding the Center of Gravity Using Rulers; (2) Finding the Center of Gravity Using Plumb Lines; (3) Changing the Center of…

  11. Using LEGO Blocks for Technology-Mediated Task-Based English Language Learning

    ERIC Educational Resources Information Center

    Gadomska, Agnieszka

    2015-01-01

    LEGO blocks have been played with by generations of children worldwide since the 1950s. It is undeniable that they boost creativity, eye-hand coordination, focus, planning, problem solving and many other skills. LEGO bricks have been also used by educators across the curricula as they are extremely motivating and engaging and, in effect, make…

  12. Exploring the Extreme: High Performance Learning Activities in Mathematics, Science and Technology. An Educator's Guide. EG-2002-10-001-DFRC

    ERIC Educational Resources Information Center

    Dana, Judi; Kock, Meri; Lewis, Mike; Peterson, Bruce; Stowe, Steve

    2010-01-01

    The many activities contained in this teaching guide emphasize hands-on involvement, prediction, data collection and interpretation, teamwork, and problem solving. The guide also contains background information about aeronautical research that can help students learn how airplanes fly. Following the background sections are a series of activities…

  13. Example Based Pedagogical Strategies in a Computer Science Intelligent Tutoring System

    ERIC Educational Resources Information Center

    Green, Nicholas

    2017-01-01

    Worked-out examples are a common teaching strategy that aids learners in understanding concepts by use of step-by-step instruction. Literature has shown that they can be extremely beneficial, with a large body of material showing they can provide benefits over regular problem solving alone. This research looks into the viability of using this…

  14. Strengthening Preceptors' Competency in Thai Clinical Nursing

    ERIC Educational Resources Information Center

    Mingpun, Renu; Srisa-ard, Boonchom; Jumpamool, Apinya

    2015-01-01

    The problem of a shortage of nurses can be addressed by employing student nurses. Obviously, nurse instructors and preceptors have to work extremely hard to train student nurses to meet the standards of nursing. The preceptorship model is yet to be explored as to what it means to have an effective program or the requisite skills to be an effective…

  15. Nonlinear and Stochastic Dynamics in the Heart

    PubMed Central

    Qu, Zhilin; Hu, Gang; Garfinkel, Alan; Weiss, James N.

    2014-01-01

    In a normal human life span, the heart beats about 2 to 3 billion times. Under diseased conditions, a heart may lose its normal rhythm and degenerate suddenly into much faster and irregular rhythms, called arrhythmias, which may lead to sudden death. The transition from a normal rhythm to an arrhythmia is a transition from regular electrical wave conduction to irregular or turbulent wave conduction in the heart, and thus this medical problem is also a problem of physics and mathematics. In the last century, clinical, experimental, and theoretical studies have shown that dynamical theories play fundamental roles in understanding the mechanisms of the genesis of the normal heart rhythm as well as lethal arrhythmias. In this article, we summarize in detail the nonlinear and stochastic dynamics occurring in the heart and their links to normal cardiac functions and arrhythmias, providing a holistic view through integrating dynamics from the molecular (microscopic) scale, to the organelle (mesoscopic) scale, to the cellular, tissue, and organ (macroscopic) scales. We discuss what existing problems and challenges are waiting to be solved and how multi-scale mathematical modeling and nonlinear dynamics may be helpful for solving these problems. PMID:25267872

  16. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  17. Scale factor management in the studies of affine models of shockproof garment elements

    NASA Astrophysics Data System (ADS)

    Denisov, Oleg; Pleshko, Mikhail; Ponomareva, Irina; Merenyashev, Vitaliy

    2018-03-01

    New samples of protective garments for performing construction work at height require numerous tests in conditions close to the real conditions of extreme vital activity. The article presents some results of studies of shockproof garment elements and a description of a patented prototype. The tests were carried out on a model whose geometric dimensions made it convenient to manufacture in a limited batch. In addition, the laboratory equipment used (for example, a unique power pendulum) and blanks made of a titanium-nickel alloy with a shape-memory effect imposed their own limitations. The problem of adequately transferring the obtained experimental results to mass-produced products was solved using the tools of classical similarity theory. Management of the scale factor in the affine modeling of the shockproof element, studied on the basis of the equiatomic titanium-nickel alloy with the shape-memory effect, allowed us to assume, with a sufficient degree of reliability, that it is technically possible to extrapolate the experimental results to full-scale objects when forming the initial data of a mathematical model of the elastoplastic deformation dynamics of shockproof garments (while observing the similarity of the features of external loading).

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

    In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  19. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  20. Multimodal Logistics Network Design over Planning Horizon through a Hybrid Meta-Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki; Yamazaki, Yoshihiro; Wada, Takeshi

    Logistics has been increasingly acknowledged as a key issue of supply chain management for improving business efficiency under global competition and diversified customer demands. This study aims at improving the quality of strategic decision making associated with the dynamic nature of logistics network optimization. In particular, recognizing the importance of addressing multimodal logistics over multiple planning periods, we have extended a previous approach termed hybrid tabu search (HybTS). The extension deploys strategic planning more concretely, so that the strategic plan can be linked to operational decision making. The idea is a smart extension of HybTS to solve a dynamic mixed integer programming problem. It is a two-level iterative method composed of a sophisticated tabu search for the location problem at the upper level and a graph algorithm for the route selection at the lower level. To keep efficiency while coping with the resulting extremely large-scale problem, we devised a systematic procedure to transform the original linear program at the lower level into a minimum cost flow problem solvable by the graph algorithm. Through numerical experiments, we verified that the proposed method outperformed commercial software. The results indicate the proposed approach can make conventional strategic decisions much more practical and is promising for real-world applications.
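
    The lower-level transformation can be illustrated on a toy instance. The sketch below, assuming the networkx package, encodes a tiny multimodal route-selection subproblem as a minimum-cost flow and solves it with a graph algorithm; the network, costs, and capacities are invented for illustration and are not from the paper.

```python
import networkx as nx

# Toy multimodal route-selection subproblem as a minimum-cost flow:
# each arc is a (mode, leg) with a unit cost and capacity; a mode change
# is modeled as an arc between "rail" and "truck" copies of a city.
G = nx.DiGraph()
G.add_node("plant", demand=-10)            # supply of 10 units
G.add_node("dc", demand=10)                # distribution center needs 10
for u, v, cost, cap in [
    ("plant", "cityA_rail", 2, 8),         # cheap but capacity-limited rail
    ("plant", "cityA_truck", 5, 10),
    ("cityA_rail", "cityA_truck", 1, 8),   # mode-change arc (handling cost)
    ("cityA_rail", "dc", 3, 8),
    ("cityA_truck", "dc", 4, 10),
]:
    G.add_edge(u, v, weight=cost, capacity=cap)

flow = nx.min_cost_flow(G)                 # solved by a graph algorithm,
cost = nx.cost_of_flow(G, flow)            # no LP solver needed
print(cost, flow)
```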

  1. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
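
    A rough miniature of the comparison, assuming the pyamg package as a stand-in AMG implementation, is sketched below: it contrasts AMG cycle counts with unpreconditioned CG iterations on a model Poisson system. PCG2's modified incomplete Cholesky preconditioner is not reproduced, so this only illustrates the shape of the experiment.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Model "flow" system: 2D Poisson matrix on a 256 x 256 grid.
A = pyamg.gallery.poisson((256, 256), format="csr")
b = np.ones(A.shape[0])

# Algebraic multigrid with classical Ruge-Stueben coarsening.
ml = pyamg.ruge_stuben_solver(A)
residuals = []
x = ml.solve(b, tol=1e-8, residuals=residuals)
print("AMG V-cycles:", len(residuals) - 1)

# Unpreconditioned conjugate gradients for a crude comparison.
iters = [0]
x2, info = cg(A, b, callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
print("CG iterations:", iters[0], " converged:", info == 0)
```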

  2. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  3. Good vibrations: Controlling light with sound (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Eggleton, Benjamin J.; Choudhary, Amol

    2016-10-01

    One of the surprises of nonlinear optics, is that light may interact strongly with sound. Intense laser light literally "shakes" the glass in optical fibres, exciting acoustic waves (sound) in the fibre. Under the right conditions, it leads to a positive feedback loop between light and sound termed "Stimulated Brillouin Scattering," or simply SBS. This nonlinear interaction can amplify or filter light waves with extreme precision in frequency which makes it uniquely suited to solve key problems in the fields of defence, biomedicine, wireless communications, spectroscopy and imaging. We have achieved the first demonstration of SBS in compact chip-scale structures, carefully designed so that the optical fields and the acoustic fields are simultaneously confined and guided. This new platform has opened a range of new functionalities that are being applied in communications and defence with breathtaking performance and compactness. My talk will introduce this new field and review our progress and achievements, including silicon based optical phononic processor.

  4. A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar

    2017-07-01

    A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.
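
    The LAD-lasso regression mentioned above can be posed as a linear program. The sketch below is a minimal, self-contained version of that idea on synthetic data, with a few corrupted responses playing the role of soft faults; the setup and parameter names are illustrative, not the authors' formulation over polynomial chaos coefficients of boundary maps.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lasso(X, y, lam):
    """Fit min_b sum|y - X b| + lam * sum|b| as a linear program,
    introducing slacks u >= |y - X b| and v >= |b|."""
    n, p = X.shape
    In, Ip = np.eye(n), np.eye(p)
    Znp, Zpn = np.zeros((n, p)), np.zeros((p, n))
    # Variable vector z = [b (free), u (>= 0), v (>= 0)].
    c = np.concatenate([np.zeros(p), np.ones(n), lam * np.ones(p)])
    A_ub = np.block([[ X, -In,  Znp],
                     [-X, -In,  Znp],
                     [ Ip, Zpn, -Ip],
                     [-Ip, Zpn, -Ip]])
    b_ub = np.concatenate([y, -y, np.zeros(2 * p)])
    bounds = [(None, None)] * p + [(0, None)] * (n + p)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 6))
b_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0, 0.5])
y = X @ b_true + 0.1 * rng.standard_t(df=2, size=50)  # heavy-tailed noise
y[::10] += 5.0                                        # a few "soft faults"
print(np.round(lad_lasso(X, y, lam=1.0), 2))          # close to b_true
```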

  5. Laser Tailoring the Surface Chemistry and Morphology for Wear, Scale and Corrosion Resistant Superhydrophobic Coatings.

    PubMed

    Boinovich, Ludmila B; Emelyanenko, Kirill A; Domantovsky, Alexander G; Emelyanenko, Alexandre M

    2018-06-04

    A strategy combining laser chemical modification with laser texturing, followed by chemisorption of a fluorinated hydrophobic agent, was used to fabricate a series of superhydrophobic coatings on an aluminum alloy with varied chemical compositions and texture parameters. It was shown that the high content of aluminum oxynitride and aluminum oxide formed in the surface layer upon laser treatment makes it possible to enhance the resistance of superhydrophobic coatings to abrasive loads. In addition, the multimodal structure of the highly porous surface layer gives the fabricated coatings a self-healing ability. The long-term behavior of the designed coatings in "hard" hot water with a substantial content of calcium carbonate demonstrated high antiscaling resistance, with self-cleaning potential against solid deposits on the superhydrophobic surfaces. A study of the corrosion protection properties and the behavior of the coatings during long-term contact with 0.5 M NaCl solution indicated extremely high chemical stability and remarkable anticorrosion properties.

  6. Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2016-11-01

    The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve this coupled multiphysics problem. The fluid motion is solved by the lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and an ellipsoidal particle's orbit in shear flow. Examples of applications to blood flow, drug delivery, and platelet adhesion and rupture are also given in the paper.
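
    The immersed boundary coupling consists of two kernel operations, sketched below in 1D with Peskin's 4-point regularized delta function: interpolating grid velocities to the solid markers, and spreading marker forces back to the grid. This is a generic illustration of the technique, not code from the LAMMPS-lattice Boltzmann coupling itself.

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function phi(r) (1D)."""
    r = np.abs(r)
    d = np.zeros_like(r)
    m1 = r < 1
    m2 = (r >= 1) & (r < 2)
    d[m1] = (3 - 2*r[m1] + np.sqrt(1 + 4*r[m1] - 4*r[m1]**2)) / 8
    d[m2] = (5 - 2*r[m2] - np.sqrt(-7 + 12*r[m2] - 4*r[m2]**2)) / 8
    return d

h = 0.1                                   # Eulerian (fluid) grid spacing
xg = np.arange(0.0, 10.0, h)              # fluid grid nodes
ug = np.sin(xg)                           # toy fluid velocity field
xm = np.array([2.34, 5.01])               # Lagrangian solid markers
fm = np.array([1.0, -0.5])                # forces carried by the markers

# Kernel weights between every marker and every grid node.
W = peskin_delta((xm[:, None] - xg[None, :]) / h)

# Interpolation step (grid -> markers): no remeshing required.
um = W @ ug
# Spreading step (markers -> grid): forces enter the fluid solver.
fg = (fm[:, None] * W).sum(axis=0) / h

print("interpolated:", um, " exact:", np.sin(xm))
print("total force conserved:", np.isclose(fg.sum() * h, fm.sum()))
```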

  7. Standard Model—axion—seesaw—Higgs portal inflation. Five problems of particle physics and cosmology solved in one stroke

    NASA Astrophysics Data System (ADS)

    Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas; Tamarit, Carlos

    2017-08-01

    We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos Ni, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value vσ ~ 10^11 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw-generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model—axion—seesaw—Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  8. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  9. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  10. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    ... finite volume schemes, discontinuous Galerkin finite element methods, and related methods for solving computational fluid dynamics (CFD) problems and ... approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large-scale stochastic systems of ... laws. Subject terms: finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.

  11. Probabilistic forecasting of extreme weather events based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert

    2016-04-01

    Extreme events in weather and climate such as high wind gusts, heavy precipitation or extreme temperatures are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X, Y) of extreme events, where X represents a deterministic forecast and Y the observed variable (for instance, wind speed), and the combined events in which Y exceeds a high threshold y while the corresponding forecast X also exceeds a high forecast threshold. More specifically, two problems are addressed: (1) Given a high forecast X = x_0, what is the probability that Y > y? In other words, provide inference on the conditional probability Pr(Y > y | X = x_0). (2) Given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis of Ferro (2007). We apply the Ramos and Ledford (2009) parametric model for bivariate tail estimation of the pair (X, Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008). References: Coles, S. (2001) An Introduction to Statistical Modelling of Extreme Values. Springer-Verlag. Ferro, C.A.T. (2007) A probability model for verifying deterministic forecasts of extreme events. Wea. Forecasting 22, 1089-1100. Hagedorn, R. (2008) Using the ECMWF reforecast dataset to calibrate EPS forecasts. ECMWF Newsletter 117, 8-13. Ramos, A. and Ledford, A. (2009) A new class of models for bivariate joint tails. J. R. Statist. Soc. B 71, 219-241.
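
    Before fitting a parametric bivariate tail model, Problem 1 can be sanity-checked with a crude empirical estimate. The sketch below bins forecast/observation pairs and computes conditional exceedance frequencies on synthetic data; it is a baseline illustration only, not the Ramos-Ledford model used in the study, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic forecast/observation pairs with correlated upper tails
# (a stand-in for reforecasts paired with station wind observations).
n = 20000
z = rng.gumbel(size=n)
X = z + 0.5 * rng.gumbel(size=n)         # deterministic forecast
Y = z + 0.5 * rng.gumbel(size=n)         # observed variable

def cond_exceedance(X, Y, x0, y, width=0.25):
    """Empirical estimate of Pr(Y > y | X ~ x0): the exceedance
    frequency among cases whose forecast fell near x0."""
    sel = np.abs(X - x0) < width
    return (Y[sel] > y).mean(), sel.sum()

y = np.quantile(Y, 0.95)                 # "extreme event" threshold
for x0 in [0.0, 2.0, 4.0]:
    p, m = cond_exceedance(X, Y, x0, y)
    print(f"x0={x0:.1f}: Pr(Y>y|X~x0) ~ {p:.3f}  (from {m} cases)")
```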

  12. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  13. Artificial Intelligence (AI), Operations Research (OR), and Decision Support Systems (DSS): A conceptual framework

    NASA Technical Reports Server (NTRS)

    Parnell, Gregory S.; Rowell, William F.; Valusek, John R.

    1987-01-01

    In recent years there has been increasing interest in applying the computer based problem solving techniques of Artificial Intelligence (AI), Operations Research (OR), and Decision Support Systems (DSS) to analyze extremely complex problems. A conceptual framework is developed for successfully integrating these three techniques. First, the fields of AI, OR, and DSS are defined and the relationships among the three fields are explored. Next, a comprehensive adaptive design methodology for AI and OR modeling within the context of a DSS is described. These observations are made: (1) the solution of extremely complex knowledge problems with ill-defined, changing requirements can benefit greatly from the use of the adaptive design process, (2) the field of DSS provides the focus on the decision making process essential for tailoring solutions to these complex problems, (3) the characteristics of AI, OR, and DSS tools appears to be converging rapidly, and (4) there is a growing need for an interdisciplinary AI/OR/DSS education.

  14. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system which can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases with different sizes were generated and solved. Also, different cost scenarios were designed to better analyze the model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS software failed to reach an optimal solution even within much longer times.

  16. Problem—solving counseling as a therapeutic tool on youth suicidal behavior in the suburban population in Sri Lanka

    PubMed Central

    Perera, E. A. Ramani; Kathriarachchi, Samudra T.

    2011-01-01

    Background: Suicidal behaviour among youth is a major public health concern in Sri Lanka. Preventing youth suicides using effective, feasible and culturally acceptable methods is therefore invaluable; however, research in this area is grossly lacking. Objective: This study aimed to determine the effectiveness of problem-solving counselling as a therapeutic intervention in the prevention of youth suicidal behaviour in Sri Lanka. Setting and design: This controlled trial was based on hospital admissions for suicide attempts in a suburban hospital in Sri Lanka, the Base Hospital Homagama. Materials and Methods: A sample of 124 was recruited using a convenience sampling method and divided into two groups, experimental and control. The control group was offered routine care and the experimental group received four sessions of problem-solving counselling over one month. The outcome in both groups was measured, six months after the initial screening, using a visual analogue scale. Results: Individualized outcome measures showed that problem-solving ability among subjects in the experimental group had improved after the four counselling sessions and that suicidal behaviour had been reduced. The results are statistically significant. Conclusion: This study confirms that problem-solving counselling is an effective therapeutic tool in the management of youth suicidal behaviour in a hospital setting in a developing country. PMID:21431005

  17. Everyday Expertise in Self-Management of Diabetes in the Dominican Republic: Implications for Learning and Performance Support Systems Design

    ERIC Educational Resources Information Center

    Reyes Paulino, Lisette G.

    2012-01-01

    An epidemic such as diabetes is an extremely complex public health, economic and social problem that is difficult to solve through medical expertise alone. Evidence-based models for improving healthcare delivery systems advocate educating patients to become more active participants in their own care. This shift demands preparing chronically ill…

  18. Solving satisfiability problems using a novel microarray-based DNA computer.

    PubMed

    Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning

    2007-01-01

    An algorithm based on a modified sticker model, together with an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering both correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause per step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-applying equipment are required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during computing, so the proposed method should be useful for large-scale problems.
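
    The clause-by-clause construction described above can be mirrored in software; a hedged in-silico sketch, with the clause encoding and data structures chosen for illustration (the actual algorithm manipulates DNA strands, not Python dictionaries):

      # Sketch: build partial assignments clause by clause, keeping only those
      # that satisfy every clause processed so far (an in-silico analogue of the
      # sticker-model construction; unset variables remain free).
      def solve_sat(clauses):
          # A clause is a list of signed ints: 3 means x3 True, -3 means x3 False.
          pool = [{}]  # partial assignments: variable index -> bool
          for clause in clauses:
              new_pool = []
              for partial in pool:
                  for lit in clause:
                      var, want = abs(lit), lit > 0
                      if partial.get(var, want) == want:
                          extended = dict(partial)
                          extended[var] = want
                          if extended not in new_pool:  # deduplicate
                              new_pool.append(extended)
              pool = new_pool
          return pool

      sols = solve_sat([[1, -2], [2, 3], [-1, 3]])
      print(len(sols), "satisfying partial assignments, e.g.", sols[0])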

  19. [Optimal solution and analysis of muscular force during standing balance].

    PubMed

    Wang, Hongrui; Zheng, Hui; Liu, Kun

    2015-02-01

    The present study was aimed at the optimal distribution of the main muscular forces in the lower extremity during human standing balance. The musculoskeletal system of the lower extremity was simplified to a physical model with 3 joints and 9 muscles. On the basis of this model, an optimization model was built to solve the problem of redundant muscle forces. A particle swarm optimization (PSO) algorithm was used to solve the single-objective and multi-objective problems, respectively. The numerical results indicated that the multi-objective optimization yields a more reasonable distribution and variation of the 9 muscular forces. Finally, the coordination of each muscle group in maintaining standing balance under passive movement was qualitatively analyzed using the simulation results.
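
    A minimal particle swarm optimization loop of the kind the study relies on; the objective, bounds, and coefficients below are generic placeholders, not the paper's 9-muscle model:

      # Sketch: generic PSO minimizing a placeholder objective
      # (a stand-in for, e.g., a sum of squared muscle stresses).
      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):
          return np.sum(x**2, axis=1)

      n_particles, dim, iters = 30, 9, 200
      w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, acceleration coefficients
      x = rng.uniform(0.0, 1.0, (n_particles, dim))   # normalized candidate forces
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), objective(x)
      gbest = pbest[np.argmin(pbest_f)]

      for _ in range(iters):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, 0.0, 1.0)                # forces kept non-negative
          f = objective(x)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)]

      print("best objective:", pbest_f.min())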

  20. Technology Use for Diabetes Problem Solving in Adolescents with Type 1 Diabetes: Relationship to Glycemic Control

    PubMed Central

    Kumah-Crystal, Yaa A.; Hood, Korey K.; Ho, Yu-Xian; Lybarger, Cindy K.; O'Connor, Brendan H.; Rothman, Russell L.

    2015-01-01

    Abstract Background: This study examines technology use for problem solving in diabetes and its relationship to hemoglobin A1C (A1C). Subjects and Methods: A sample of 112 adolescents with type 1 diabetes completed measures assessing use of technologies for diabetes problem solving, including mobile applications, social technologies, and glucose software. Hierarchical regression was performed to identify the contribution of a new nine-item Technology Use for Problem Solving in Type 1 Diabetes (TUPS) scale to A1C, considering known clinical contributors to A1C. Results: Mean age for the sample was 14.5 (SD 1.7) years, mean A1C was 8.9% (SD 1.8%), 50% were female, and diabetes duration was 5.5 (SD 3.5) years. Cronbach's α reliability for TUPS was 0.78. In regression analyses, variables significantly associated with A1C were the socioeconomic status (β=−0.26, P<0.01), Diabetes Adolescent Problem Solving Questionnaire (β=−0.26, P=0.01), and TUPS (β=0.26, P=0.01). Aside from the Diabetes Self-Care Inventory—Revised, each block added significantly to the model R2. The final model R2 was 0.22 for modeling A1C (P<0.001). Conclusions: Results indicate a counterintuitive relationship between higher use of technologies for problem solving and higher A1C. Adolescents with poorer glycemic control may use technology in a reactive, as opposed to preventive, manner. Better understanding of the nature of technology use for self-management over time is needed to guide the development of technology-mediated problem solving tools for youth with type 1 diabetes. PMID:25826706

  1. The implementation of multiple intelligences based teaching model to improve mathematical problem solving ability for student of junior high school

    NASA Astrophysics Data System (ADS)

    Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli

    2017-05-01

    This research aims: to determine whether the mathematical problem solving ability of students taught with a Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning; to measure the improvement in mathematical problem solving ability under each of the two approaches; and to assess students' attitudes toward the Multiple Intelligences based teaching model. The method employed is a quasi-experiment controlled by a pre-test and post-test. The population comprised all seventh-grade classes of SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as samples: one taught with the Multiple Intelligences based teaching model and the other with cooperative learning. The data were obtained from a mathematical problem solving test, a scale questionnaire of student attitudes, and observation. The results show that the mathematical problem solving of students taught with the Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning, that the problem solving ability of both groups is at an intermediate level, and that students showed a positive attitude toward learning mathematics with the Multiple Intelligences based teaching model. As a recommendation for future work, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.

  3. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    NASA Technical Reports Server (NTRS)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.
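
    One concrete instance of the phase-transition structure described above is random 3-SAT, whose hard instances are known to concentrate near a clause-to-variable ratio of about 4.27; a minimal generator pinned at that ratio (the encoding and seed handling are illustrative, and this is not the report's own generator):

      # Sketch: generate random 3-SAT instances at the ~4.27 clauses-per-variable
      # phase transition, where empirically the hardest instances concentrate.
      import random

      def random_3sat(n_vars, ratio=4.27, seed=0):
          rnd = random.Random(seed)
          clauses = []
          for _ in range(round(ratio * n_vars)):
              vars_ = rnd.sample(range(1, n_vars + 1), 3)
              clauses.append([v if rnd.random() < 0.5 else -v for v in vars_])
          return clauses

      print(random_3sat(20)[:3])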

  4. An interactive problem-solving approach to teach traumatology for medical students.

    PubMed

    Abu-Zidan, Fikri M; Elzubeir, Margaret A

    2010-08-13

    We aimed to evaluate an interactive problem-solving approach for teaching traumatology from the perspective of students and to consider its implications for faculty development. A two-hour problem-solving, interactive tutorial on traumatology was structured to cover the main topics in trauma management. The tutorial was based on real cases covering specific topics and objectives. Seven tutorials (5-9 students in each) were given by the same tutor in the same format to fourth- and fifth-year medical students at the Auckland and UAE Universities (n = 50). A 16-item questionnaire on a 7-point Likert-type scale, covering educational tools, tutor-centered skills, and student-centered skills, was answered by the students, followed by open-ended comments. The tutorials were highly ranked by the students. The mean rating of educational tools was the highest, followed by tutor-centered skills and finally student-centered skills. There was a significant increase in the ratings of the studied attributes over time (F = 3.9, p = 0.004, ANOVA). Students' open-ended comments were highly supportive of the interactive problem-solving approach for teaching traumatology. The interactive problem-solving approach can be an effective and enjoyable alternative or supplement to traditional instruction for teaching traumatology to medical students. Training in this approach should be encouraged for faculty development.

  5. Problem solving, loneliness, depression levels and associated factors in high school adolescents.

    PubMed

    Sahin, Ummugulsum; Adana, Filiz

    2016-01-01

    To determine problem solving, loneliness and depression levels and associated factors in high school adolescents. This cross-sectional study was conducted in a public high school in a city in western Turkey (Bursa); the population was 774 students, of whom 394 were sampled. Students were selected using the multiple sampling method. A Personal Information Form with 23 questions, the Problem Solving Inventory (PSI), the Loneliness Scale (UCLA) and the Beck Depression Inventory (BDI) were used as data collection tools. Basic statistical analyses, the t-test, Kruskal-Wallis H, one-way ANOVA and the Pearson correlation test were used to evaluate the data. Necessary permissions were obtained from the relevant institution, students, parents and the ethics committee. The study found significant differences between "problem solving level" and family type, health assessment, life quality, and closeness to mothers, fathers and siblings; between "loneliness level" and gender, family income, health assessment, life quality, and closeness to mothers, fathers and siblings; and between "depression level" and life quality, family income and closeness to fathers. Unfavorable socio-economic and cultural conditions can affect the problem solving, loneliness and depression levels of adolescents. Providing structured education to at-risk adolescents as part of school mental health nursing practice is recommended.

  6. Modifying PASVART to solve singular nonlinear 2-point boundary problems

    NASA Technical Reports Server (NTRS)

    Fulton, James P.

    1988-01-01

    To study the buckling and post-buckling behavior of shells and various other structures, one must solve a nonlinear 2-point boundary problem. Since closed-form analytic solutions for such problems are virtually nonexistent, numerical approximations are inevitable, which makes the availability of accurate and reliable software indispensable. In a series of papers, Lentini and Pereyra, expanding on the work of Keller, developed PASVART: an adaptive finite difference solver for nonlinear 2-point boundary problems. While the program does produce extremely accurate solutions with great efficiency, it is hindered by a major limitation: PASVART will only locate isolated solutions of the problem. In buckling problems, the solution set is not unique; it will contain singular or bifurcation points, where different branches of the solution set may intersect. Thus, PASVART is useless precisely when the problem becomes interesting. To resolve this deficiency we propose a modification of PASVART that enables the user to perform a more complete bifurcation analysis: PASVART is combined with the Thurston bifurcation solution, an adaptation of Newton's method motivated by the work of Koiter and reinterpreted by Thurston as an iterative computational method.
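
    For flavor, a hedged sketch of the class of problem PASVART targets, solved here with SciPy's finite-difference collocation solver rather than PASVART itself; the ODE is a generic Bratu-type example, not taken from the report:

      # Sketch: a nonlinear 2-point boundary-value problem (Bratu's problem,
      # y'' + exp(y) = 0, y(0) = y(1) = 0), solved by finite-difference
      # collocation; PASVART itself is not used here.
      import numpy as np
      from scipy.integrate import solve_bvp

      def rhs(x, y):
          # First-order system: y0' = y1, y1' = -exp(y0).
          return np.vstack([y[1], -np.exp(y[0])])

      def bc(ya, yb):
          return np.array([ya[0], yb[0]])  # y(0) = y(1) = 0

      x = np.linspace(0.0, 1.0, 11)
      y0 = np.zeros((2, x.size))           # initial guess on the lower branch
      sol = solve_bvp(rhs, bc, x, y0)
      print("converged:", sol.status == 0, "max |y|:", np.abs(sol.y[0]).max())

    Like PASVART, a solver of this kind converges only to the isolated solution nearest its initial guess, which is exactly the limitation the proposed bifurcation-analysis modification addresses.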

  7. Effect of differentiation of self on adolescent risk behavior: test of the theoretical model.

    PubMed

    Knauth, Donna G; Skowron, Elizabeth A; Escobar, Melicia

    2006-01-01

    Innovative theoretical models are needed to explain the occurrence of high-risk sexual behaviors, alcohol and other-drug (AOD) use, and academic engagement among ethnically diverse, inner-city adolescents. The aim of this study was to test the credibility of a theoretical model based on the Bowen family systems theory to explain adolescent risk behavior. Specifically tested was the relationship between the predictor variables of differentiation of self, chronic anxiety, and social problem solving and the dependent variables of high-risk sexual behaviors, AOD use, and academic engagement. An ex post facto cross-sectional design was used to test the usefulness of the theoretical model. Data were collected from 161 racially/ethnically diverse, inner-city high school students, 14 to 19 years of age. Participants completed self-report written questionnaires, including the Differentiation of Self Inventory, State-Trait Anxiety Inventory, Social Problem Solving for Adolescents, Drug Involvement Scale for Adolescents, and the Sexual Behavior Questionnaire. Consistent with the model, higher levels of differentiation of self related to lower levels of chronic anxiety (p < .001) and higher levels of social problem solving (p < .01). Higher chronic anxiety was related to lower social problem solving (p < .001). A test of mediation showed that chronic anxiety mediates the relationship between differentiation of self and social problem solving (p < .001), indicating that differentiation influences social problem solving through chronic anxiety. Higher levels of social problem solving were related to less drug use (p < .05), less high-risk sexual behaviors (p < .01), and an increase in academic engagement (p < .01). Findings support the theoretical model's credibility and provide evidence that differentiation of self is an important cognitive factor that enables adolescents to manage chronic anxiety and motivates them to use effective problem solving, resulting in less involvement in health-compromising behaviors and increased academic engagement.

  8. A hybrid nonlinear programming method for design optimization

    NASA Technical Reports Server (NTRS)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.

  9. Scalable software architectures for decision support.

    PubMed

    Musen, M A

    1999-12-01

    Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.

  10. Family problem solving interactions and 6-month symptomatic and functional outcomes in youth at ultra-high risk for psychosis and with recent onset psychotic symptoms: a longitudinal study.

    PubMed

    O'Brien, Mary P; Zinberg, Jamie L; Ho, Lorena; Rudd, Alexandra; Kopelowicz, Alex; Daley, Melita; Bearden, Carrie E; Cannon, Tyrone D

    2009-02-01

    This study prospectively examined the relationship between social problem solving behavior exhibited by youths at ultra-high risk for psychosis (UHR) and with recent onset psychotic symptoms and their parents during problem solving discussions, and youths' symptoms and social functioning six months later. Twenty-seven adolescents were administered the Structured Interview for Prodromal Syndromes and the Strauss-Carpenter Social Contact Scale at baseline and follow-up assessment. Primary caregivers participated with youth in a ten minute discussion that was videotaped, transcribed, and coded for how skillful participants were in defining problems, generating solutions, and reaching resolution, as well as how constructive and/or conflictual they were during the interaction. Controlling for social functioning at baseline, adolescents' skillful problem solving and constructive communication, and parents' constructive communication, were associated with youths' enhanced social functioning six months later. Controlling for symptom severity at baseline, we found that there was a positive association between adolescents' conflictual communications at baseline and an increase in positive symptoms six months later. Taken together, findings from this study provide support for further research into the possibility that specific family interventions, such as problem solving and communication skills training, may improve the functional prognosis of at-risk youth, especially in terms of their social functioning.

  12. Problem solving styles among people who use alcohol and other drugs in South Africa.

    PubMed

    Sorsdahl, Katherine; Stein, Dan J; Carrara, Henri; Myers, Bronwyn

    2014-01-01

    The present study examines the relationship between problem-solving styles, socio-demographic variables and the risk of alcohol and other drug (AOD)-related problems in a South African population. The Social Problem-Solving Inventory-Revised, the Center for Epidemiologic Studies Depression Scale (CES-D) and the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) were administered to a convenience sample of 1000 respondents. According to the ASSIST, 32% and 49% of respondents met criteria for moderate to high risk of alcohol use and illicit drug use, respectively. After adjusting for the effects of other variables in the model, respondents who were of "Coloured" ancestry (PR=1.20, 95% CI 1.0-1.4), male (PR=1.19, 95% CI 1.04-1.37), older (PR=1.01, 95% CI 1.00-1.02), who adopted an avoidance style of coping with problems (PR=1.03, 95% CI 1.01-1.05) or who met criteria for depression (PR=1.42, 95% CI 1.12-1.79) were more likely to be classified as having risky AOD use. This suggests that interventions to improve problem solving and provide people with cognitive strategies to cope better with their problems may hold promise for reducing risky AOD use.

  13. Numerical Analysis of Flood modeling of upper Citarum River under Extreme Flood Condition

    NASA Astrophysics Data System (ADS)

    Siregar, R. I.

    2018-02-01

    This paper focuses on numerical methods and computation for analysing flood parameters, namely water level and flood discharge. The numerical methods used here for unsteady flow conditions have strengths and weaknesses; among their strengths, they are easily applied to cases with irregular flow boundaries. The study area is the upper Citarum watershed, Bandung, West Java. This paper uses a computational approach with the Force2 program and HEC-RAS to solve the flow problem in the upper Citarum River and to investigate and forecast extreme flood conditions. The numerical analysis is based on extreme flood events that have occurred in the upper Citarum watershed. The modeled water levels and extreme flood discharges are compared with measurement data for validation. The area inundated by the flood that occurred in 2010 is about 75.26 square kilometres. Comparison of the two methods shows that the FEM analysis with the Force2 program agrees best with the validation data, with a Nash index of 0.84 versus 0.76 for HEC-RAS for water level; for discharge, the Nash index is 0.80 with Force2 and 0.79 with HEC-RAS.

  14. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    PubMed

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems in pure-feedback form. The nonlinear systems considered possess unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions, so that the problems caused by the unstructured uncertainties can be solved. An adaptive fuzzy state observer is designed to handle the nonmeasurable states. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained to a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.

  15. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, a model relating hop-counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its own location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm adapts to different topological environments with low computational cost, and that high accuracy can be achieved without setting complex parameters.
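
    The regularized extreme learning step can be pictured as a random-feature ridge regression; a sketch under that reading, with dimensions, the toy data, and the regularization weight chosen arbitrarily:

      # Sketch: regularized extreme learning machine (random hidden layer plus
      # ridge regression on the output weights), mapping hop-count vectors to
      # physical distances; the data below are placeholders.
      import numpy as np

      rng = np.random.default_rng(1)

      def elm_fit(X, T, n_hidden=50, lam=1e-2):
          W = rng.normal(size=(X.shape[1], n_hidden))
          b = rng.normal(size=n_hidden)
          H = np.tanh(X @ W + b)  # random, untrained hidden-layer features
          beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      # Toy data: hop counts to 5 anchors -> distance (not a real deployment).
      X = rng.integers(1, 10, size=(200, 5)).astype(float)
      T = X.sum(axis=1, keepdims=True) * 10.0 + rng.normal(scale=5.0, size=(200, 1))
      W, b, beta = elm_fit(X, T)
      rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
      print("train RMSE:", float(rmse))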

  16. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of three-dimensional metal matrix composite microvolume deformation, tens or hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the spectrum of the problem's stiffness matrix. A series of computational experiments is therefore used to choose the appropriate method, and different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of method for a linear solver in severe plastic deformation simulations.
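
    The kind of empirical method selection described above can be scripted; a minimal CPU-side sketch comparing two Krylov methods on a model sparse system (the 1D Poisson matrix is a stand-in for the elastoplastic stiffness matrix, and the comparison criteria are illustrative):

      # Sketch: empirically compare Krylov methods on a model sparse system.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg, bicgstab

      n = 1000
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
      b = np.ones(n)

      for name, solver in [("CG", cg), ("BiCGStab", bicgstab)]:
          counter = {"iters": 0}
          x, info = solver(A, b,
                           callback=lambda xk: counter.update(iters=counter["iters"] + 1))
          res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
          print(f"{name}: info={info}, iterations={counter['iters']}, residual={res:.2e}")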

  17. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with the problem of minimizing the total weighted tardiness of jobs in real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due dates. First, a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics, called GA-VNS and VNS-SA, combining the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. In addition, we propose three fuzzy earliest-due-date heuristics for the given problem. Through computational experiments with several random test problems, a robust calibration is applied to the parameters. Finally, computational results on test problems of different scales are presented to compare the proposed algorithms. PMID:24883359

  18. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
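
    The two-stage strategy above, a broad classical search that locates a promising basin followed by a sharper search inside it, can be sketched generically; everything below (the toy objective, population sizes, step scales) is illustrative, and a simple stochastic local refinement stands in for the quantum annealer:

      # Sketch: a genetic algorithm finds a promising region of a toy landscape;
      # a local refinement stage (standing in for the QA device) polishes it.
      import numpy as np

      rng = np.random.default_rng(2)

      def f(x):
          return np.sum((x - 0.7) ** 2) + 0.1 * np.sum(np.sin(20 * x))

      def ga(pop_size=40, dim=6, gens=100):
          pop = rng.random((pop_size, dim))
          for _ in range(gens):
              fit = np.array([f(p) for p in pop])
              parents = pop[np.argsort(fit)[: pop_size // 2]]   # elitist selection
              kids = parents[rng.integers(len(parents), size=pop_size - len(parents))]
              kids = np.clip(kids + rng.normal(scale=0.05, size=kids.shape), 0, 1)
              pop = np.vstack([parents, kids])
          return min(pop, key=f)

      def local_refine(x, steps=2000, scale=0.01):
          best, best_f = x, f(x)
          for _ in range(steps):
              cand = np.clip(best + rng.normal(scale=scale, size=x.shape), 0, 1)
              if f(cand) < best_f:
                  best, best_f = cand, f(cand)
          return best, best_f

      x, fx = local_refine(ga())
      print("refined objective:", fx)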

  19. PetIGA: A framework for high-performance isogeometric analysis

    DOE PAGES

    Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.
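
    The PETSc solver layer PetIGA builds on can be glimpsed in a small petsc4py example; the 1D Laplacian below is unrelated to PetIGA's B-spline assembly and is shown only to illustrate the assemble-then-KSP workflow:

      # Sketch: PETSc workflow via petsc4py -- assemble a 1D Laplacian,
      # then solve with a Krylov method (KSP); not PetIGA's own assembly.
      from petsc4py import PETSc

      n = 100
      A = PETSc.Mat().createAIJ([n, n], nnz=3)
      rstart, rend = A.getOwnershipRange()
      for i in range(rstart, rend):
          if i > 0:
              A.setValue(i, i - 1, -1.0)
          A.setValue(i, i, 2.0)
          if i < n - 1:
              A.setValue(i, i + 1, -1.0)
      A.assemble()

      b = A.createVecLeft()
      b.set(1.0)
      x = A.createVecRight()

      ksp = PETSc.KSP().create()
      ksp.setOperators(A)
      ksp.setType('cg')
      ksp.getPC().setType('jacobi')
      ksp.setFromOptions()   # honor -ksp_type etc. from the command line
      ksp.solve(b, x)
      print("iterations:", ksp.getIterationNumber())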

  20. Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carleton, James Brian; Parks, Michael L.

    Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.

  1. Working with low back pain: problem-solving orientation and function.

    PubMed

    Shaw, W S; Feuerstein, M; Haufler, A J; Berkowitz, S M; Lopez, M S

    2001-08-01

    A number of ergonomic, workplace and individual psychosocial factors and health behaviors have been associated with the onset, exacerbation and/or maintenance of low back pain (LBP). The functional impact of these factors may be influenced by how a worker approaches problems in general. The present study was conducted to determine whether problem-solving orientation was associated with physical and mental health outcomes in fully employed workers (soldiers) reporting a history of LBP in the past year. The sample consisted of 475 soldiers (446 male, 29 female; mean age 24.5 years) who worked in jobs identified as high risk for LBP-related disability and reported LBP symptoms in the past 12 months. The Social Problem-Solving Inventory and the Standard Form-12 (SF-12) were completed by all subjects. Hierarchical multiple regression analyses were used to predict the SF-12 physical health summary scale from interactions of LBP symptoms with each of five problem-solving subscales. Low scores on positive problem-solving orientation (F(1,457)=4.49), and high scores on impulsivity/carelessness (F(1,457)=9.11) were associated with a steeper gradient in functional loss related to LBP. Among those with a longer history of low-grade LBP, an avoidant approach to problem-solving was also associated with a steeper gradient of functional loss (three-way interaction; F(1,458)=4.58). These results suggest that the prolonged impact of LBP on daily function may be reduced by assisting affected workers to conceptualize LBP as a problem that can be overcome and using strategies that promote taking an active role in reducing risks for LBP. Secondary prevention efforts may be improved by addressing these factors.

  2. Clinical and Cognitive Characteristics Associated with Mathematics Problem Solving in Adolescents with Autism Spectrum Disorder.

    PubMed

    Oswald, Tasha M; Beck, Jonathan S; Iosif, Ana-Maria; McCauley, James B; Gilhooly, Leslie J; Matter, John C; Solomon, Marjorie

    2016-04-01

    Mathematics achievement in autism spectrum disorder (ASD) has been understudied. However, the ability to solve applied math problems is associated with academic achievement, everyday problem-solving abilities, and vocational outcomes. The paucity of research on math achievement in ASD may be partly explained by the widely-held belief that most individuals with ASD are mathematically gifted, despite emerging evidence to the contrary. The purpose of the study was twofold: to assess the relative proportions of youth with ASD who demonstrate giftedness versus disability on applied math problems, and to examine which cognitive (i.e., perceptual reasoning, verbal ability, working memory) and clinical (i.e., test anxiety) characteristics best predict achievement on applied math problems in ASD relative to typically developing peers. Twenty-seven high-functioning adolescents with ASD and 27 age- and Full Scale IQ-matched typically developing controls were assessed on standardized measures of math problem solving, perceptual reasoning, verbal ability, and test anxiety. Results indicated that 22% of the ASD sample evidenced a mathematics learning disability, while only 4% exhibited mathematical giftedness. The parsimonious linear regression model revealed that the strongest predictor of math problem solving was perceptual reasoning, followed by verbal ability and test anxiety, then diagnosis of ASD. These results inform our theories of math ability in ASD and highlight possible targets of intervention for students with ASD struggling with mathematics. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Students’ Representation in Mathematical Word Problem-Solving: Exploring Students’ Self-efficacy

    NASA Astrophysics Data System (ADS)

    Sahendra, A.; Budiarto, M. T.; Fuad, Y.

    2018-01-01

    This descriptive qualitative research investigates students' representations in mathematical word problem solving in relation to self-efficacy. The research subjects are two female eighth graders at a school in Surabaya with equal mathematical ability, one with high and one with low self-efficacy. The subjects were chosen based on the results of a mathematical ability test, documentation of mid-term test results in the even semester of the 2016/2017 academic year, and a questionnaire on mathematical word problems in terms of a self-efficacy scale. The selected students were asked to solve mathematical word problems and were interviewed. The results show that the student with high self-efficacy tends to use multiple representations, sketches and mathematical models, whereas the student with low self-efficacy tends to use a single representation, sketches or mathematical models only, in mathematical word problem solving. This study emphasizes that teachers should pay attention to students' representations when designing innovative learning intended to increase each student's self-efficacy and maximize mathematical achievement, with adjustments to the school's situation and conditions.

  4. Comparison of the hedonic general Labeled Magnitude Scale with the hedonic 9-point scale.

    PubMed

    Kalva, Jaclyn J; Sims, Charles A; Puentes, Lorenzo A; Snyder, Derek J; Bartoshuk, Linda M

    2014-02-01

    The hedonic 9-point scale was designed to compare palatability among different food items; however, it has also been used occasionally to compare individuals and groups. Such comparisons can be invalid because scale labels (for example, "like extremely") can denote systematically different hedonic intensities across some groups. Addressing this problem, the hedonic general Labeled Magnitude Scale (gLMS) frames affective experience in terms of the strongest imaginable liking/disliking of any kind, which can yield valid group comparisons of food palatability provided extreme hedonic experiences are unrelated to food. For each scale, 200 panelists rated affect for remembered food products (including favorite and least favorite foods) and sampled foods; they also sampled taste stimuli (quinine, sucrose, NaCl, citric acid) and rated their intensity. Finally, subjects identified experiences representing the endpoints of the hedonic gLMS. Both scales were similar in their ability to detect within-subject hedonic differences across a range of food experiences, but group comparisons favored the hedonic gLMS. With the 9-point scale, extreme labels were strongly associated with extremes in food affect. In contrast, gLMS data showed that scale extremes referenced nonfood experiences. Perceived taste intensity significantly influenced differences in food liking/disliking (for example, those experiencing the most intense tastes, called supertasters, showed more extreme liking and disliking for their favorite and least favorite foods). Scales like the hedonic gLMS are suitable for across-group comparisons of food palatability. © 2014 Institute of Food Technologists®

  5. Reviewing Some Crucial Concepts of Gibbs Energy in Chemical Equilibrium Using a Computer-Assisted, Guided-Problem-Solving Approach

    ERIC Educational Resources Information Center

    Borge, Javier

    2015-01-01

    G, G°, ΔrG, ΔrG°, ΔG, and ΔG° are essential quantities for mastering chemical equilibrium. Although the number of publications devoted to explaining these items is extremely high, it seems that they do not produce the desired effect, because some articles and textbooks are still being written with…

  6. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
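
    The fast explicit algorithms referenced above replace the standard explicit update with stabilized ones when the timestep crosses the stiffness threshold; a sketch of one such scheme, the explicit asymptotic approximation, on a toy one-species rate equation (the rates and switching criterion here are placeholders, not the paper's network):

      # Sketch: explicit asymptotic update for a stiff rate equation
      # dy/dt = F - k*y. When dt*k is large, plain explicit Euler blows up;
      # the asymptotic form stays stable with no implicit solve.
      def step(y, F, k, dt):
          if dt * k < 1.0:
              return y + dt * (F - k * y)        # ordinary explicit Euler
          return (y + dt * F) / (1.0 + dt * k)   # explicit asymptotic update

      y, F, k, dt = 0.0, 1.0, 1e6, 1e-3          # extremely stiff: dt*k = 1000
      for _ in range(100):
          y = step(y, F, k, dt)
      print("y ->", y, "(equilibrium F/k =", F / k, ")")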

  7. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  8. Computer problem-solving coaches for introductory physics: Design and usability studies

    NASA Astrophysics Data System (ADS)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  9. A modified priority list-based MILP method for solving large-scale unit commitment problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Lu, Ning; Wu, Di

    This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority-list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. In the second step, the binary variables corresponding to the generators whose on/off status is predetermined are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority-list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with only a minor compromise in optimality when appropriate scale factors are selected.
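
    A hedged sketch of the two-step idea using PuLP; the unit data, the predetermination rule, and the cost model are invented for illustration and omit startup costs, ramping, reserves, and minimum up/down times:

      # Sketch: fix cheap units on / expensive units off from a priority list,
      # then solve the remaining unit-commitment MILP (toy data only).
      from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                        LpBinary, PULP_CBC_CMD)

      units = [  # (name, capacity MW, marginal cost $/MWh)
          ("cheap1", 400, 10), ("cheap2", 300, 12),
          ("mid1", 200, 25), ("peak1", 100, 60),
      ]
      demand = [650, 700, 800, 600]  # MW per hour

      prob = LpProblem("uc", LpMinimize)
      u, p = {}, {}
      for name, cap, cost in units:
          for t in range(len(demand)):
              u[name, t] = LpVariable(f"u_{name}_{t}", cat=LpBinary)
              p[name, t] = LpVariable(f"p_{name}_{t}", 0, cap)
              prob += p[name, t] <= cap * u[name, t]
              # Priority-list predetermination: base units always on,
              # the peaker forced off when demand is low.
              if cost <= 12:
                  prob += u[name, t] == 1
              elif cost >= 60 and demand[t] < 750:
                  prob += u[name, t] == 0
      for t, d in enumerate(demand):
          prob += lpSum(p[n, t] for n, _, _ in units) == d
      prob += lpSum(c * p[n, t] for n, _, c in units for t in range(len(demand)))
      prob.solve(PULP_CBC_CMD(msg=False))
      print("cost:", prob.objective.value())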

  10. THE APPLICATION OF ENGLISH-WORD MORPHOLOGY TO AUTOMATIC INDEXING AND EXTRACTING. ANNUAL SUMMARY REPORT.

    ERIC Educational Resources Information Center

    DOLBY, J.L.; AND OTHERS

    THE STUDY IS CONCERNED WITH THE LINGUISTIC PROBLEM INVOLVED IN TEXT COMPRESSION--EXTRACTING, INDEXING, AND THE AUTOMATIC CREATION OF SPECIAL-PURPOSE CITATION DICTIONARIES. IN SPITE OF EARLY SUCCESS IN USING LARGE-SCALE COMPUTERS TO AUTOMATE CERTAIN HUMAN TASKS, THESE PROBLEMS REMAIN AMONG THE MOST DIFFICULT TO SOLVE. ESSENTIALLY, THE PROBLEM IS TO…

  11. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem

    DOE PAGES

    Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...

    2015-10-02

    The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
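
    In the standard block notation (a generic statement of the problem, not copied from the paper), the structure being preserved is that of the Bethe–Salpeter Hamiltonian,

      H_{BS} = \begin{pmatrix} A & B \\ -\bar{B} & -\bar{A} \end{pmatrix},
      \qquad A = A^{\mathsf{H}}, \quad B = B^{\mathsf{T}},

    where A is Hermitian and B is complex symmetric; the Tamm–Dancoff approximation discards the coupling block B and diagonalizes A alone, which is the restriction whose eigenvalues are shown above to be overestimates.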

  12. Boundary regularized integral equation formulation of the Helmholtz equation in acoustics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Khoo, Boo-Cheong; Chan, Derek Y C

    2015-01-01

    A boundary integral formulation for the solution of the Helmholtz equation is developed in which all traditional singular behaviour in the boundary integrals is removed analytically. The numerical precision of this approach is illustrated with calculation of the pressure field owing to radiating bodies in acoustic wave problems. This method facilitates the use of higher order surface elements to represent boundaries, resulting in a significant reduction in the problem size with improved precision. Problems with extreme geometric aspect ratios can also be handled without diminished precision. When combined with the CHIEF method, uniqueness of the solution of the exterior acoustic problem is assured without the need to solve hypersingular integrals.
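
    As background, the conventional boundary integral statement of the Helmholtz problem, from which the singular and hypersingular kernels arise, can be written (standard textbook form, assumed here rather than quoted from the paper) as

      c(\mathbf{x})\,\phi(\mathbf{x})
        + \int_S \phi(\mathbf{y})\,
            \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_{\mathbf{y}}}\,
            \mathrm{d}S(\mathbf{y})
        = \int_S \frac{\partial \phi}{\partial n}(\mathbf{y})\,
            G(\mathbf{x},\mathbf{y})\, \mathrm{d}S(\mathbf{y}),
      \qquad
      G(\mathbf{x},\mathbf{y}) = \frac{e^{\mathrm{i}k|\mathbf{x}-\mathbf{y}|}}
                                      {4\pi\,|\mathbf{x}-\mathbf{y}|},

    where G is the free-space Green's function and c(x) the solid-angle factor; the regularized formulation removes the singular behaviour of these kernels analytically before discretization.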

  14. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
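
    The convex surrogate such methods build on replaces tensor rank with a weighted sum of nuclear (trace) norms of the mode-n unfoldings; in generic notation (not TNCP's exact formulation),

      \min_{\mathcal{X}} \; \sum_{n=1}^{N} \alpha_n \,
          \big\| \mathbf{X}_{(n)} \big\|_{*}
      \quad \text{subject to} \quad
      \mathcal{X}_{\Omega} = \mathcal{T}_{\Omega},

    where X_(n) denotes the mode-n unfolding of the tensor, ||·||_* the matrix nuclear norm, and Ω the set of observed entries; TNCP's contribution is to push this relaxation onto the much smaller factor matrices of a CANDECOMP/PARAFAC decomposition instead of the full unfoldings.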

  15. Microscale--The Way of the Future.

    ERIC Educational Resources Information Center

    Waterman, Edward L.; Thompson, Stephen

    1989-01-01

    Small-scale chemistry employs a modern design philosophy and small, inexpensive plastic apparatus to create a learning laboratory that fosters creativity, invention, and problem solving. This article describes the characteristics of the small-scale activities. An n-solutions chemical reaction matrix is provided with examples of classroom use. (YP)

  16. Standard Model–axion–seesaw–Higgs portal inflation. Five problems of particle physics and cosmology solved in one stroke

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas

    We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos N_i, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value v_σ ∼ 10^11 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw-generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model–axion–seesaw–Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  17. A comparison of treatment completers and non-completers of an in-patient treatment programme for male personality-disordered offenders.

    PubMed

    McMurran, Mary; Huband, Nick; Duggan, Conor

    2008-06-01

    In the treatment of offenders with personality disorders, one matter that requires attention is the rate of treatment non-completion. This is important as it has cost-efficiency and negative outcome implications. We compared the characteristics of participants in a personality disorder treatment programme divided into three groups: Group 1, treatment completers (N = 21); Group 2, those expelled for rule breaking (N = 16); and Group 3, those removed because they were not engaging in treatment (N = 19). We hypothesized that, compared with the other two groups, Group 2 would score higher on the impulsive/careless style scale, and that those in Group 3 would score higher on the avoidant style scale of the social problem-solving inventory-revised (SPSI-R). Further, we hypothesized that high anxiety would be associated with treatment non-completion in both groups. These differences were not found. However, when both groups of non-completers were combined for comparison, completers were shown to score significantly higher on SPSI-R rational problem solving and significantly lower on SPSI-R impulsive/careless style. Findings suggest that teaching impulsive people a rational approach to social problem solving may reduce their level of non-completion.

  18. Toward multiscale modelings of grain-fluid systems

    NASA Astrophysics Data System (ADS)

    Chareyre, Bruno; Yuan, Chao; Montella, Eduard P.; Salager, Simon

    2017-06-01

    Computationally efficient methods have been developed for simulating partially saturated granular materials in the pendular regime. In contrast, expensive direct resolution of the two-phase fluid dynamics problem can hardly be avoided for mixed pendular-funicular situations or even saturated regimes. Following previous developments for single-phase flow, a pore-network approach to the coupling problems is described. The geometry and movements of phases and interfaces are described on the basis of a tetrahedrization of the pore space, introducing elementary objects such as bridge, meniscus, pore body and pore throat, together with local rules of evolution. As firmly established local rules are still missing on some aspects (entry capillary pressure and pore-scale pressure-saturation relations, forces on the grains, or kinetics of transfers in mixed situations), a multi-scale numerical framework is introduced, enhancing the pore-network approach with the help of direct simulations. Small subsets of a granular system are extracted, in which multiphase scenarios are solved using the lattice Boltzmann method (LBM). In turn, a global problem is assembled and solved at the network scale, as illustrated by a simulated primary drainage.

  19. Development of weighting value for ecodrainage implementation assessment criteria

    NASA Astrophysics Data System (ADS)

    Andajani, S.; Hidayat, D. P. A.; Yuwono, B. E.

    2018-01-01

    This research aims to generate a weighting value for each factor and to find the most influential factors for assessing the implementation of the ecodrain concept, using loading factors and Cronbach's alpha. Drainage problems, especially in urban areas, are becoming more complex and need to be handled as soon as possible. Flood and drought problems cannot be solved by the conventional drainage paradigm (draining runoff as fast as possible to the nearest drainage area). The new, environmentally based drainage paradigm called "ecodrain" can solve both flood and drought problems. For optimal results, ecodrain should be applied from the smallest (domestic) scale up to the largest (city) scale, and it is therefore necessary to assess drainage conditions from an environmental perspective. This research implements the ecodrain concept through guidelines consisting of parameters and assessment criteria; 2 variables, 7 indicators and 63 key factors were derived from previous research and related regulations. The conclusion of the research is that the most influential indicator for the technical management variable is the storage system, while for the non-technical management variable it is the government's role.

  20. Multiscale global identification of porous structures

    NASA Astrophysics Data System (ADS)

    Hatłas, Marcin; Beluch, Witold

    2018-01-01

    The paper is devoted to the evolutionary identification of the material constants of porous structures based on measurements conducted at the macro scale. Numerical homogenization with the RVE concept is used to determine the equivalent properties of a macroscopically homogeneous material. Finite element method software is applied to solve the boundary-value problem in both scales. A global optimization method in the form of an evolutionary algorithm is employed to solve the identification task. Modal analysis is performed to collect the data necessary for the identification. A numerical example demonstrating the effectiveness of the proposed approach is included.

  1. A gradiometric version of contactless inductive flow tomography: theory and first applications

    PubMed Central

    Wondrak, Thomas; Stefani, Frank

    2016-01-01

    The contactless inductive flow tomography (CIFT) is a measurement technique that allows reconstructing the flow of electrically conducting fluids by measuring the flow-induced perturbations of one or various applied magnetic fields and solving the underlying inverse problem. One of the most promising application fields of CIFT is the continuous casting of steel, for which the online monitoring of the flow in the mould would be highly desirable. In previous experiments at a small-scale model of continuous casting, CIFT has been applied to various industrially relevant problems, including the sudden changes of flow structures in case of argon injection and the influence of a magnetic stirrer at the submerged entry nozzle. The application of CIFT in the presence of electromagnetic brakes, which are widely used to stabilize the flow in the mould, has turned out to be more challenging due to the extreme dynamic range between the strong applied brake field and the weak flow-induced perturbations of the measuring field. In this paper, we present a gradiometric version of CIFT, relying on gradiometric field measurements, that is capable of overcoming these problems and therefore seems a promising candidate for applying CIFT in the steel casting industry. This article is part of the themed issue ‘Supersensing through industrial process tomography’. PMID:27185963

  2. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-cum-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.

  3. Promoting Students' Problem Solving Skills and Knowledge of STEM Concepts in a Data-Rich Learning Environment: Using Online Data as a Tool for Teaching about Renewable Energy Technologies

    NASA Astrophysics Data System (ADS)

    Thurmond, Brandi

    This study sought to compare a data-rich learning (DRL) environment that utilized online data as a tool for teaching about renewable energy technologies (RET) to a lecture-based learning environment to determine the impact of the learning environment on students' knowledge of Science, Technology, Engineering, and Math (STEM) concepts related to renewable energy technologies and students' problem solving skills. Two purposefully selected Advanced Placement (AP) Environmental Science teachers were included in the study. Each teacher taught one class about RET in a lecture-based environment (control) and another class in a DRL environment (treatment), for a total of four classes of students (n=128). This study utilized a quasi-experimental, pretest/posttest, control-group design. The initial hypothesis that the treatment group would have a significant gain in knowledge of STEM concepts related to RET and be better able to solve problems when compared to the control group was not supported by the data. Although students in the DRL environment had a significant gain in knowledge after instruction, posttest score comparisons of the control and treatment groups revealed no significant differences between the groups. Further, no significant differences were noted in students' problem solving abilities as measured by scores on a problem-based activity and self-reported abilities on a reflective questionnaire. This suggests that the DRL environment is at least as effective as the lecture-based learning environment in teaching AP Environmental Science students about RET and fostering the development of problem solving skills. As this was a small-scale study, further research is needed to provide information about effectiveness of DRL environments in promoting students' knowledge of STEM concepts and problem-solving skills.

  4. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experimental results on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
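
    As a concrete illustration of how such a problem is mapped onto annealing hardware, the sketch below builds one textbook QUBO encoding of weighted k-clique. The penalty weights and the networkx-based graph representation are assumptions for illustration; the paper's exact encoding and embedding are not reproduced here.

    ```python
    # One standard QUBO for weighted k-clique (illustrative, not the paper's):
    # minimize A*(k - sum_i x_i)^2 + B*sum_{(i,j) not in E} x_i x_j
    #          - sum_{(i,j) in E} w_ij x_i x_j
    import itertools
    import networkx as nx

    def kclique_qubo(G, k, A=10.0, B=10.0):
        Q = {}
        for i in G.nodes:
            Q[(i, i)] = A * (1.0 - 2.0 * k)       # linear part of the size penalty
        for i, j in itertools.combinations(G.nodes, 2):
            q = 2.0 * A                           # quadratic part of the size penalty
            if G.has_edge(i, j):
                q -= G[i][j].get("weight", 1.0)   # reward heavy clique edges
            else:
                q += B                            # forbid non-adjacent pairs
            Q[(i, j)] = q
        return Q

    G = nx.gnp_random_graph(8, 0.6, seed=1)
    nx.set_edge_attributes(G, {e: 1.0 for e in G.edges}, "weight")
    print(len(kclique_qubo(G, k=3)), "QUBO terms")
    ```

    A dictionary in this (i, j) → coefficient form is the shape commonly accepted by QUBO samplers; bitstrings with exactly k ones and no penalty terms active correspond to k-cliques.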

  5. Measuring Cognitive Load with Subjective Rating Scales during Problem Solving: Differences between Immediate and Delayed Ratings

    ERIC Educational Resources Information Center

    Schmeck, Annett; Opfermann, Maria; van Gog, Tamara; Paas, Fred; Leutner, Detlev

    2015-01-01

    Subjective cognitive load (CL) rating scales are widely used in educational research. However, there are still some open questions regarding the point of time at which such scales should be applied. Whereas some studies apply rating scales directly after each step or task and use an average of these ratings, others assess CL only once after the…

  6. Model Order Reduction for the fast solution of 3D Stokes problems and its application in geophysical inversion

    NASA Astrophysics Data System (ADS)

    Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro

    2017-04-01

    The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. All this data is essential to build Earth's evolution models and to reproduce many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc) together with understanding the relationship between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to the existing inversion techniques in terms of improving the estimation of the elevation (topography) by including a dynamic component arising from sub-lithospheric mantle flow. In order to do so, we implement an efficient Reduced Order Method (ROM) built upon classic Finite Elements. ROM allows to reduce significantly the computational cost of solving a family of problems, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists in creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. In order to check the Reduced Basis approach, we implemented the method in a 3D domain reproducing a portion of Earth that covers up to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) is created by varying viscosities and densities in a similar way as it would happen in an inversion problem. The Reduced Basis method is shown to be an extremely efficiently solver for the Stokes equation in this context.
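
    A compact sketch of the offline/online split described above, with a dense hypothetical parameterized system standing in for the Stokes problem: snapshot solutions at a few parameter values are orthonormalized by an SVD, and a new parameter instance is then solved in the small reduced space.

    ```python
    # Hedged reduced-basis sketch; the parameterization is a stand-in, not the
    # authors' Stokes assembly.
    import numpy as np

    def assemble(mu, n=200):
        # Hypothetical affine parameterization: stiffness plus a mu-scaled term.
        K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1))
        return K + mu * np.eye(n), np.ones(n)

    # Offline stage: snapshots at a few parameter values, orthonormalized by SVD.
    S = np.column_stack([np.linalg.solve(*assemble(mu)) for mu in (0.5, 1.0, 2.0, 4.0)])
    V = np.linalg.svd(S, full_matrices=False)[0]

    # Online stage: a new parameter instance solved in the 4-dimensional reduced space.
    A, f = assemble(1.7)
    u_rb = V @ np.linalg.solve(V.T @ A @ V, V.T @ f)
    u_ref = np.linalg.solve(A, f)
    print("relative error:", np.linalg.norm(u_rb - u_ref) / np.linalg.norm(u_ref))
    ```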

  7. On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.

    2018-01-01

    The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically based on a high-precision method developed for solving fourth-order differential equations. It is established that the spectrum contains an eigenvalue corresponding to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.

  8. From Numerical Problem Solving to Model-Based Experimentation Incorporating Computer-Based Tools of Various Scales into the ChE Curriculum

    ERIC Educational Resources Information Center

    Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima

    2009-01-01

    A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…

  9. Concept mapping improves academic performance in problem solving questions in biochemistry subject.

    PubMed

    Baig, Mukhtiar; Tariq, Saba; Rehman, Rehana; Ali, Sobia; Gazzaz, Zohair J

    2016-01-01

    To assess the effectiveness of concept mapping (CM) on the academic performance of medical students in problem-solving as well as in declarative knowledge questions, and their perception regarding CM. The present analytical and questionnaire-based study was carried out at Bahria University Medical and Dental College (BUMDC), Karachi, Pakistan. In this analytical study, students were assessed with problem-solving questions (A-type MCQs) and declarative knowledge questions (short essay questions), and 50% of the questions were from the topics learned by CM. Students also completed a 10-item, 3-point Likert scale questionnaire about their perception of the effectiveness of the CM approach, and two open-ended questions were also asked. There was a significant difference in the marks obtained in those problem-solving questions which were learned by CM as compared to those topics which were taught by traditional lectures (p<0.001), while no significant difference was observed in marks in declarative knowledge questions (p=0.704). Analysis of students' perception regarding CM showed that the majority of students perceive CM as a helpful and enjoyable technique. In the open-ended questions, the majority of the students commented positively on the effectiveness of CM. Our results indicate that CM improves academic performance in problem solving but not in declarative knowledge questions. Students' perception of the effectiveness of CM was overwhelmingly positive.

  10. Using the DPSIR Framework to Develop a Conceptual Model: Technical Support Document

    EPA Science Inventory

    Modern problems (e.g., pollution, urban sprawl, environmental equity) are complex and often transcend spatial and temporal scales. Systems thinking is an approach to problem solving that is based on the belief that the component parts of a system are best understood in the contex...

  11. Algorithms for Mathematical Programming with Emphasis on Bi-level Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldfarb, Donald; Iyengar, Garud

    2014-05-22

    The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.

  12. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum,[1] a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP. [1] www.gecforum.org

  13. On the scalability of the Albany/FELIX first-order Stokes approximation ice sheet solver for large-scale simulations of the Greenland and Antarctic ice sheets

    DOE PAGES

    Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...

    2015-01-01

    We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
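
    To make option (1) concrete, here is a small SciPy sketch of an ILU-preconditioned Krylov solve on a model sparse system; an AMG preconditioner, as in option (2), would replace the ILU factors with a multigrid hierarchy. This is illustrative only, and the actual Albany/FELIX solver stack is not reproduced here.

    ```python
    # Hedged sketch: ILU-preconditioned GMRES on a model 1D Laplacian system.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spla.spilu(A)                          # incomplete LU factors
    M = spla.LinearOperator((n, n), ilu.solve)   # preconditioner action M^{-1}v

    x, info = spla.gmres(A, b, M=M)              # preconditioned Krylov solve
    print("converged" if info == 0 else f"gmres info = {info}")
    ```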

  14. Trinification, the hierarchy problem, and inverse seesaw neutrino masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cauet, Christophe; Paes, Heinrich; Wiesenfeldt, Soeren

    2011-05-01

    In minimal trinification models light neutrino masses can be generated via a radiative seesaw mechanism, where the masses of the right-handed neutrinos originate from loops involving Higgs and fermion fields at the unification scale. This mechanism is absent in models aiming at solving or ameliorating the hierarchy problem, such as low-energy supersymmetry, since the large seesaw scale disappears. In this case, neutrino masses need to be generated via a TeV-scale mechanism. In this paper, we investigate an inverse seesaw mechanism and discuss some phenomenological consequences.

  15. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations once thought to require expensive hardware designs and/or complex, special-purpose programming may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important application of the ideas underlying this framework is graph database technology supporting social network pattern matching used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy Laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected before the expense of manufacture.

  16. A variational technique for smoothing flight-test and accident data

    NASA Technical Reports Server (NTRS)

    Bach, R. E., Jr.

    1980-01-01

    The problem of determining aircraft motions along a trajectory is solved using a variational algorithm that generates unmeasured states and forcing functions, and estimates instrument bias and scale-factor errors. The problem is formulated as a nonlinear fixed-interval smoothing problem, and is solved as a sequence of linear two-point boundary value problems, using a sweep method. The algorithm has been implemented for use in flight-test and accident analysis. Aircraft motions are assumed to be governed by a six-degree-of-freedom kinematic model; forcing functions consist of body accelerations and winds, and the measurement model includes aerodynamic and radar data. Examples of the determination of aircraft motions from typical flight-test and accident data are presented.

  17. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers and has important guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.

  18. Applying Squeaky-Wheel Optimization to Schedule Airborne Astronomy Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Kuerklue, Elif

    2004-01-01

    We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
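
    For readers unfamiliar with SWO, the sketch below shows its generic construct-analyze-prioritize loop on a toy one-machine scheduling problem: tasks that come out late (the "squeaky wheels") have their priority increased before the next greedy construction pass. The job data and blame rule are illustrative assumptions; the paper's flight-planning constraints are far richer.

    ```python
    # Toy SWO loop: greedy construction in priority order, blame = lateness.
    jobs = {"a": (3, 5), "b": (2, 4), "c": (4, 9), "d": (1, 3)}  # duration, deadline

    def construct(order):
        t, lateness = 0, {}
        for j in order:
            duration, due = jobs[j]
            t += duration
            lateness[j] = max(0, t - due)
        return list(order), sum(lateness.values()), lateness

    def swo(rounds=30):
        priority = {j: 0.0 for j in jobs}
        best, best_cost = None, float("inf")
        for _ in range(rounds):
            order = sorted(jobs, key=lambda j: -priority[j])
            solution, cost, lateness = construct(order)
            if cost < best_cost:
                best, best_cost = solution, cost
            for j, late in lateness.items():     # squeaky wheels get louder
                priority[j] += late
        return best, best_cost

    print(swo())   # best order found and its total lateness
    ```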

  19. Process-informed extreme value statistics- Why and how?

    NASA Astrophysics Data System (ADS)

    Schumann, Andreas; Fischer, Svenja

    2017-04-01

    In many parts of the world, annual maximum series (AMS) of runoff consist of flood peaks which differ in their genesis. There are several reasons why these differences should be considered. Often multivariate flood characteristics (volumes, shapes) are of interest, and these characteristics depend on the flood types. For regionalization, the main impacts on the flood regime have to be specified; if this regime depends on different flood types, type-specific hydro-meteorological and/or watershed characteristics are relevant. The ratios between event types often change over the range of observations. If a majority of events belonging to a certain flood type dominates the extrapolation of a probability distribution function (pdf), it is a problem if this more frequent type is not typical for the extraordinarily large extremes that determine the right tail of the pdf. To account for differences in flood origin, several problems have to be solved. The events have to be separated into different groups according to their genesis, which can be difficult for events long in the past, for which e.g. precipitation data are not available. Another problem concerns the flood-type-specific statistics: if block maxima are used, the sample of floods belonging to a certain type is often incomplete, as other events overlay smaller ones. Some practical, usable statistical tools to solve these and other problems are presented in a case study. Seasonal models were developed which distinguish between winter and summer floods but also between events with long and short timescales. The pdfs of the two groups of summer floods are combined via a new mixing model. The application to German watersheds demonstrates the advantages of the new model, giving specific influence to flood types.
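
    A minimal sketch of the seasonal building block behind such models, under the common assumption that winter and summer maxima are independent, so the annual-maximum CDF is the product of the seasonal CDFs. The Gumbel fits and parameter values are hypothetical, not the paper's German-watershed results.

    ```python
    # Hedged sketch: annual-maximum CDF as a product of independent seasonal CDFs.
    from scipy.optimize import brentq
    from scipy.stats import gumbel_r

    winter = gumbel_r(loc=100.0, scale=30.0)   # hypothetical winter-flood fit
    summer = gumbel_r(loc=80.0, scale=50.0)    # hypothetical summer-flood fit

    def annual_cdf(x):
        return winter.cdf(x) * summer.cdf(x)

    # 100-year flood: solve F(x) = 1 - 1/100 for x.
    q100 = brentq(lambda x: annual_cdf(x) - 0.99, 1.0, 2000.0)
    print(f"100-year quantile: {q100:.1f}")
    ```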

  20. Scalar and vector Keldysh models in the time domain

    NASA Astrophysics Data System (ADS)

    Kiselev, M. N.; Kikoin, K. A.

    2009-04-01

    The exactly solvable Keldysh model of a disordered electron system in a random scattering field with extremely long correlation length is converted to a time-dependent model with extremely long relaxation. The dynamical problem is solved for an ensemble of two-level systems (TLS) with fluctuating well depths having the discrete Z2 symmetry. It is shown also that symmetric TLS with fluctuating barrier transparency may be described in terms of the vector Keldysh model with time-dependent random planar rotations in the xy plane having continuous SO(2) symmetry. Application of this model to the description of dynamic fluctuations in quantum dots and optical lattices is discussed.

  1. Can Management Potential Be Revealed in Groups?

    ERIC Educational Resources Information Center

    Chartrand, P. J.; Jackson, D.

    1971-01-01

    Videotaping small group problem solving sessions and applying the Bales Social Interaction scale can give valuable insight into areas where people (particularly managers) can profitably spend time developing themselves. (Author/EB)

  2. Implicit SPH v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro

    2016-11-09

    The ISPH code is developed to solve multi-physics meso-scale flow problems using the implicit SPH method. In particular, the code provides solutions for incompressible, multiphase, and electro-kinetic flows.

  3. Grand challenges for biological engineering

    PubMed Central

    Yoon, Jeong-Yeol; Riley, Mark R

    2009-01-01

    Biological engineering will play a significant role in solving many of the world's problems in medicine, agriculture, and the environment. Recently the U.S. National Academy of Engineering (NAE) released a document "Grand Challenges in Engineering," covering broad realms of human concern from sustainability, health, vulnerability and the joy of living. Biological engineers, having tools and techniques at the interface between living and non-living entities, will play a prominent role in forging a better future. The 2010 Institute of Biological Engineering (IBE) conference in Cambridge, MA, USA will address, in part, the roles of biological engineering in solving the challenges presented by the NAE. This letter presents a brief outline of how biological engineers are working to solve these large scale and integrated problems of our society. PMID:19772647

  4. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element in the supply chain, but it can also produce the greatest cost. Therefore, it is important to keep transportation costs as low as possible. Reducing the transportation cost can be done in several ways; one of them is optimizing the routing of the vehicles, which refers to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, the vehicles have their own capacity, and the total demand of the customers on a route should not exceed the capacity of the vehicle. CVRP belongs to the class of NP-hard problems, so exact algorithms become highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: Genetic Algorithm and Particle Swarm Optimization. The paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need to be improved. From algorithm testing and a numerical example, Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
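
    Both metaheuristics ultimately score candidate routings the same way; a minimal sketch of that shared evaluation step is shown below, decoding a giant-tour permutation into capacity-feasible routes. The instance data and the greedy split rule are illustrative assumptions, not the paper's test case.

    ```python
    # Shared CVRP evaluation: split a giant tour into capacity-feasible routes
    # (greedy split) and sum Euclidean route lengths.
    import math

    def route_cost(depot, stops, coords):
        path = [depot] + stops + [depot]
        return sum(math.dist(coords[a], coords[b]) for a, b in zip(path, path[1:]))

    def evaluate(tour, demand, coords, capacity, depot=0):
        routes, current, load = [], [], 0.0
        for c in tour:
            if load + demand[c] > capacity:      # vehicle full: start a new route
                routes.append(current)
                current, load = [], 0.0
            current.append(c)
            load += demand[c]
        routes.append(current)
        return sum(route_cost(depot, r, coords) for r in routes), routes

    coords = {0: (0, 0), 1: (2, 1), 2: (-1, 3), 3: (4, -2), 4: (1, -3)}
    demand = {1: 4, 2: 3, 3: 5, 4: 2}
    print(evaluate([1, 2, 3, 4], demand, coords, capacity=8))
    ```

    In a GA the permutation is evolved directly by crossover and mutation, whereas in PSO a continuous position vector is typically decoded into a permutation (e.g. by random keys) before the same evaluation is applied.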

  5. Minimum-fuel turning climbout and descent guidance of transport jets

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Kreindler, E.

    1983-01-01

    The complete flightpath optimization problem for minimum fuel consumption from takeoff to landing including the initial and final turns from and to the runway heading is solved. However, only the initial and final segments which contain the turns are treated, since the straight-line climbout, cruise, and descent problems have already been solved. The paths are derived by generating fields of extremals, using the necessary conditions of optimal control together with singular arcs and state constraints. Results show that the speed profiles for straight flight and turning flight are essentially identical except for the final horizontal accelerating or decelerating turns. The optimal turns require no abrupt maneuvers, and an approximation of the optimal turns could be easily integrated with present straight-line climb-cruise-descent fuel-optimization algorithms. Climbout at the optimal IAS rather than the 250-knot terminal-area speed limit would save 36 lb of fuel for the 727-100 aircraft.

  6. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  7. One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1991-01-01

    The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control, and boundary control problems.

  8. CABINS: Case-based interactive scheduler

    NASA Technical Reports Server (NTRS)

    Miyashita, Kazuo; Sycara, Katia

    1992-01-01

    In this paper we discuss the need for interactive factory schedule repair and improvement, and we identify case-based reasoning (CBR) as an appropriate methodology. Case-based reasoning is the problem solving paradigm that relies on a memory of past problem solving experiences (cases) to guide current problem solving. Cases similar to the current case are retrieved from the case memory, and similarities and differences of the current case to past cases are identified. Then a best case is selected, and its repair plan is adapted to fit the current problem description. If a repair solution fails, an explanation for the failure is stored along with the case in memory, so that the user can avoid repeating similar failures in the future. So far we have identified a number of repair strategies and tactics for factory scheduling and have implemented a part of our approach in a prototype system, called CABINS. As future work, we are going to scale up CABINS to evaluate its usefulness in a real manufacturing environment.

  9. Mothers' problem-solving skill and use of help with infant-related issues: the role of importance and need for action.

    PubMed

    Pridham, K F; Chang, A S; Hansen, M F

    1987-08-01

    Examination was made of the relationship of mothers' appraisal of the importance of and need for action around infant-related issues to maternal experience (parity and time since birth), use of help, and perceived problem-solving competence. Sixty-two mothers (38 primiparae and 24 multiparae) kept a daily log of issues, rated for importance and for need for action, and of help used, for 90 days post-birth. Mothers also reported perceived problem-solving competence on an 11-item scale. Findings indicated tentativeness in ratings of importance and action. Ratings of importance were associated with action ratings, except for temperament issues. Action ratings for baby care and illness issues decreased significantly with time. Otherwise, maternal experience had no effect on ratings. More of the variance in perceived competence than use of help was explained by action and importance ratings.

  10. Effects of traumatic brain injury on a virtual reality social problem solving task and relations to cortical thickness in adolescence.

    PubMed

    Hanten, Gerri; Cook, Lori; Orsten, Kimberley; Chapman, Sandra B; Li, Xiaoqi; Wilde, Elisabeth A; Schnelle, Kathleen P; Levin, Harvey S

    2011-02-01

    Social problem solving was assessed in 28 youth ages 12-19 years (15 with moderate to severe traumatic brain injury (TBI), 13 uninjured) using a naturalistic, computerized virtual reality (VR) version of the Interpersonal Negotiations Strategy interview (Yeates, Schultz, & Selman, 1991). In each scenario, processing load condition was varied in terms of number of characters and amount of information. Adolescents viewed animated scenarios depicting social conflict in a virtual microworld environment from an avatar's viewpoint, and were questioned on four problem solving steps: defining the problem, generating solutions, selecting solutions, and evaluating the likely outcome. Scoring was based on a developmental scale in which responses were judged as impulsive, unilateral, reciprocal, or collaborative, in order of increasing score. Adolescents with TBI were significantly impaired on the summary VR-Social Problem Solving (VR-SPS) score in Condition A (2 speakers, no irrelevant information), p=0.005; in Condition B (2 speakers+irrelevant information), p=0.035; and Condition C (4 speakers+irrelevant information), p=0.008. Effect sizes (Cohen's d) were large (A=1.40, B=0.96, C=1.23). Significant group differences were strongest and most consistent for defining the problems and evaluating outcomes. The relation of task performance to cortical thickness of specific brain regions was also explored, with significant relations found with orbitofrontal regions, the frontal pole, the cuneus, and the temporal pole. Results are discussed in the context of specific cognitive and neural mechanisms underlying social problem solving deficits after childhood TBI.

  11. Effects of Traumatic Brain Injury on a Virtual Reality Social Problem Solving Task and Relations to Cortical Thickness in Adolescence

    PubMed Central

    Hanten, Gerri; Cook, Lori; Orsten, Kimberley; Chapman, Sandra B.; Li, Xiaoqi; Wilde, Elisabeth A.; Schnelle, Kathleen P.; Levin, Harvey S.

    2011-01-01

    Social problem solving was assessed in 28 youth ages 12–19 years (15 with moderate to severe traumatic brain injury (TBI), 13 uninjured) using a naturalistic, computerized virtual reality (VR) version of the Interpersonal Negotiations Strategy interview (Yeates, Schultz, & Selman, 1991). In each scenario, processing load condition was varied in terms of number of characters and amount of information. Adolescents viewed animated scenarios depicting social conflict in a virtual microworld environment from an avatar’s viewpoint, and were questioned on four problem solving steps: defining the problem, generating solutions, selecting solutions, and evaluating the likely outcome. Scoring was based on a developmental scale in which responses were judged as impulsive, unilateral, reciprocal, or collaborative, in order of increasing score. Adolescents with TBI were significantly impaired on the summary VR-Social Problem Solving (VR-SPS) score in Condition A (2 speakers, no irrelevant information), p = 0.005; in Condition B (2 speakers + irrelevant information), p = 0.035; and Condition C (4 speakers + irrelevant information), p = 0.008. Effect sizes (Cohen’s d) were large (A = 1.40, B = 0.96, C = 1.23). Significant group differences were strongest and most consistent for defining the problems and evaluating outcomes. The relation of task performance to cortical thickness of specific brain regions was also explored, with significant relations found with orbitofrontal regions, the frontal pole, the cuneus, and the temporal pole. Results are discussed in the context of specific cognitive and neural mechanisms underlying social problem solving deficits after childhood TBI. PMID:21147137

  12. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, estimated by Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
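
    A small sketch of the traversal-time estimate described above: fit a kernel density to historical flight times for one route and take the mode of the estimated distribution. The sample data are synthetic stand-ins for flight records.

    ```python
    # Hedged sketch: mode of a kernel-density estimate over route traversal times.
    import numpy as np
    from scipy.stats import gaussian_kde

    times = np.random.default_rng(2).normal(130.0, 10.0, size=500)   # minutes
    kde = gaussian_kde(times)
    grid = np.linspace(times.min(), times.max(), 1000)
    mode = grid[np.argmax(kde(grid))]
    print(f"estimated traversal time (mode): {mode:.1f} min")
    ```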

  13. MBR-SIFT: A mirror reflected invariant feature descriptor using a binary representation for image matching.

    PubMed

    Su, Mingzhe; Ma, Yan; Zhang, Xiangfen; Wang, Yan; Zhang, Yuping

    2017-01-01

    The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy, together with two similarity measures for the MBR-SIFT descriptor, is proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed.

  14. MBR-SIFT: A mirror reflected invariant feature descriptor using a binary representation for image matching

    PubMed Central

    Su, Mingzhe; Ma, Yan; Zhang, Xiangfen; Wang, Yan; Zhang, Yuping

    2017-01-01

    The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy, together with two similarity measures for the MBR-SIFT descriptor, is proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed. PMID:28542537
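
    The sketch below illustrates the generic binary-SIFT idea these records build on: threshold a 128-dimensional descriptor into bits and compare candidates by Hamming distance. The median-threshold rule is a simple stand-in; the MBR-SIFT cell reordering, eight-direction reconstruction, and reverse coding are not reproduced here.

    ```python
    # Generic BSIFT-style sketch: binarize a descriptor, match by Hamming distance.
    import numpy as np

    def binarize(desc):
        return (desc > np.median(desc)).astype(np.uint8)

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(3)
    d1 = rng.random(128)                 # a SIFT-like 128-d descriptor
    d2 = d1 + 0.05 * rng.random(128)     # a near-duplicate of it
    print("Hamming distance:", hamming(binarize(d1), binarize(d2)))
    ```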

  15. Subspace projection method for unstructured searches with noisy quantum oracles using a signal-based quantum emulation device

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Ostrove, Corey I.

    2017-01-01

    This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
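
    For context on the baseline being compared against, the success probability of k Grover iterations on an unstructured search over N items with a single marked item follows from the standard rotation analysis; the sketch below evaluates it at the usual near-optimal iteration count. This is textbook material, not the paper's emulation device.

    ```python
    # Success probability of k Grover iterations, one marked item among N.
    import math

    def grover_success(N, k):
        theta = math.asin(1.0 / math.sqrt(N))    # rotation angle per iteration
        return math.sin((2 * k + 1) * theta) ** 2

    N = 1024
    k_opt = math.floor(math.pi / (4 * math.asin(1.0 / math.sqrt(N))))
    print(k_opt, grover_success(N, k_opt))       # ~25 iterations, p close to 1
    ```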

  16. A centre-free approach for resource allocation with lower bounds

    NASA Astrophysics Data System (ADS)

    Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly

    2017-09-01

    Since the complexity and scale of systems are continuously increasing, there is a growing interest in developing distributed algorithms that are capable of addressing information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.

  17. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  18. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. Both methods possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
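
    As a reference point for the PRP family discussed above, here is a minimal PRP+ sketch: the generic textbook scheme with an Armijo line search, not the paper's two modified algorithms (which, per the abstract, achieve their properties without line searches).

        import numpy as np

        def prp_plus_cg(f, grad_f, x0, tol=1e-6, max_iter=1000):
            """Generic PRP+ nonlinear conjugate gradient with Armijo backtracking."""
            x = np.asarray(x0, dtype=float)
            g = grad_f(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                if g.dot(d) >= 0:            # safeguard: restart with steepest descent
                    d = -g
                alpha = 1.0                  # Armijo backtracking line search
                while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):
                    alpha *= 0.5
                x_new = x + alpha * d
                g_new = grad_f(x_new)
                # PRP beta with the beta_k >= 0 safeguard noted in the abstract
                beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        # usage: minimize f(x) = ||x||^2 from a nonzero start
        print(prp_plus_cg(lambda v: v.dot(v), lambda v: 2 * v, np.ones(5)))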

  19. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. Both methods possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409

  20. Gravitational Waves From the Kerr/CFT Correspondence

    NASA Astrophysics Data System (ADS)

    Porfyriadis, Achilleas

    Astronomical observation suggests the existence of near-extreme Kerr black holes in the sky. Properties of diffeomorphisms imply that dynamics of the near-horizon region of near-extreme Kerr are governed by an infinite-dimensional conformal symmetry. This symmetry may be exploited to analytically, rather than numerically, compute a variety of potentially observable processes. In this thesis we compute the gravitational radiation emitted by a small compact object that orbits in the near-horizon region and plunges into the horizon of a large rapidly rotating black hole. We study the holographically dual processes in the context of the Kerr/CFT correspondence and find our conformal field theory (CFT) computations in perfect agreement with the gravity results. We compute the radiation emitted by a particle on the innermost stable circular orbit (ISCO) of a rapidly spinning black hole. We confirm previous estimates of the overall scaling of the power radiated, but show that there are also small oscillations all the way to extremality. Furthermore, we reveal an intricate mode-by-mode structure in the flux to infinity, with only certain modes having the dominant scaling. The scaling of each mode is controlled by its conformal weight. Massive objects in adiabatic quasi-circular inspiral towards a near-extreme Kerr black hole quickly plunge into the horizon after passing the ISCO. The post-ISCO plunge trajectory is shown to be related by a conformal map to a circular orbit. Conformal symmetry of the near-horizon region is then used to compute analytically the gravitational radiation produced during the plunge phase. Most extreme-mass-ratio-inspirals of small compact objects into supermassive black holes end with a fast plunge from an eccentric last stable orbit. We use conformal transformations to analytically solve for the radiation emitted from various fast plunges into extreme and near-extreme Kerr black holes.

  1. The effectiveness of artificial intelligent 3-D virtual reality vocational problem-solving training in enhancing employment opportunities for people with traumatic brain injury.

    PubMed

    Man, David Wai Kwong; Poon, Wai Sang; Lam, Chow

    2013-01-01

    People with traumatic brain injury (TBI) often experience cognitive deficits in attention, memory, executive functioning and problem-solving. The purpose of the present research study was to examine the effectiveness of an artificial intelligent virtual reality (VR)-based vocational problem-solving skill training programme designed to enhance employment opportunities for people with TBI. This was a prospective randomized controlled trial (RCT) comparing the effectiveness of the above programme with that of the conventional psycho-educational approach. Forty participants with mild (n = 20) or moderate (n = 20) brain injury were randomly assigned to each training programme. Comparisons of problem-solving skills were performed with the Wisconsin Card Sorting Test, the Tower of London Test and the Vocational Cognitive Rating Scale. Improvement in selective memory processes and perception of memory function were found. Across-group comparison showed that the VR group performed more favourably than the therapist-led one in terms of objective and subjective outcome measures and better vocational outcomes. These results support the potential use of a VR-based approach to cognitive training in people with TBI. Further VR applications, limitations and future research are described.

  2. Nanotechnology: Its Promise and Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Vicki

    2009-05-14

    Vicki Colvin of Rice University talks about how nanotechnology-enabled systems, with dimensions on the scale of a billionth of a meter, offer great promise for solving difficult social problems and creating enormous possibilities.

  3. Laboratory Scale Electrodeposition. Practice and Applications.

    ERIC Educational Resources Information Center

    Bruno, Thomas J.

    1986-01-01

    Discusses some aspects of electrodeposition and electroplating. Emphasizes the materials, techniques, and safety precautions necessary to make electrodeposition work reliably in the chemistry laboratory. Describes some problem-solving applications of this process. (TW)

  4. Quantum vacuum energy in general relativity

    NASA Astrophysics Data System (ADS)

    Henke, Christian

    2018-02-01

    The paper deals with the scale discrepancy between the observed vacuum energy in cosmology and the theoretical quantum vacuum energy (the cosmological constant problem). Here, we demonstrate that Einstein's equation and an analogy to particle physics lead to the first physical justification of the so-called fine-tuning problem. This fine-tuning could be automatically satisfied with the variable cosmological term Λ(a) = Λ_0 + Λ_1 a^{-(4-ε)}, 0 < ε ≪ 1, where a is the scale factor. As a side effect of our solution of the cosmological constant problem, the dynamical part of the cosmological term generates an attractive force and solves the missing mass problem of dark matter.

  5. Multicultural Mastery Scale for Youth: Multidimensional Assessment of Culturally Mediated Coping Strategies

    ERIC Educational Resources Information Center

    Fok, Carlotta Ching Ting; Allen, James; Henry, David; Mohatt, Gerald V.

    2012-01-01

    Self-mastery refers to problem-focused coping facilitated through personal agency. Communal mastery describes problem solving through an interwoven social network. This study investigates an adaptation of self- and communal mastery measures for youth. Given the important distinction between family and peers in the lives of youth, these adaptation…

  6. Attitude Bolstering Following Self-Induced Value Discrepancy.

    ERIC Educational Resources Information Center

    Sherman, Steven J.; Gorkin, Larry

    To study the effects of behaving inconsistently with a central attitude, subjects (N=77) filled out a "Contemporary Social Issues Questionnaire," and then completed a sex-role or non-sex-role logic problem. It was hypothesized that subjects who score high on a feminism scale and who fail to solve a sex-role problem, thus demonstrating…

  7. Scaling up Social: Strategies for Solving Social Work's Grand Challenges

    ERIC Educational Resources Information Center

    Rodriguez, Maria Y.; Ostrow, Laysha; Kemp, Susan P.

    2017-01-01

    The Grand Challenges for Social Work Initiative aims to focus the profession's attention on how social work can play a larger role in mitigating contemporary social problems. Yet a central issue facing contemporary social work is its seeming reticence to engage with social problems, and their solutions, beyond individual-level interventions.…

  8. Taking the Incredible Years Child and Teacher Programs to Scale in Wales

    ERIC Educational Resources Information Center

    Hutchings, Judy; Williams, Margiad Elen

    2017-01-01

    Students who demonstrate conduct problems pose ongoing challenges for teachers. Therefore, prevention programs that all families and teachers of young children can use to promote social and emotional learning, emotion regulation, and problem solving are of great interest to researchers and practitioners alike. This article describes the Incredible…

  9. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and, recently, on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  10. Enhancing public health outcomes in developing countries: from good policies and best practices to better implementation.

    PubMed

    Woolcock, Michael

    2018-06-01

    In rich and poor countries alike, a core challenge is building the state's capability for policy implementation. Delivering high-quality public health and health care (affordably, reliably and at scale, for all) exemplifies this challenge, since doing so requires deftly integrating refined technical skills (surgery), broad logistics management (supply chains, facilities maintenance), adaptive problem solving (curative care), and resolving ideological differences (who pays? who provides?), even as the prevailing health problems themselves only become more diverse, complex, and expensive as countries become more prosperous. However, the current state of state capability in developing countries is demonstrably alarming, with the strains and demands only likely to intensify in the coming decades. Prevailing "best practice" strategies for building implementation capability (copying and scaling putative successes from abroad) are too often part of the problem, while individual training ("capacity building") and technological upgrades (e.g. new management information systems) remain necessary but deeply insufficient. An alternative approach is outlined, one centered on building implementation capability by working iteratively to solve problems nominated and prioritized by local actors.

  11. Student reactions to problem-based learning in photonics technician education

    NASA Astrophysics Data System (ADS)

    Massa, Nicholas M.; Donnelly, Judith; Hanes, Fenna

    2014-07-01

    Problem-based learning (PBL) is an instructional approach in which students learn problem-solving and teamwork skills by collaboratively solving complex real-world problems. Research shows that PBL improves student knowledge and retention, motivation, problem-solving skills, and the ability to skillfully apply knowledge in new and novel situations. One of the challenges faced by students accustomed to traditional didactic methods, however, is acclimating to the PBL process in which problem parameters are often ill-defined and ambiguous, often leading to frustration and disengagement with the learning process. To address this problem, the New England Board of Higher Education (NEBHE), funded by the National Science Foundation Advanced Technological Education (NSF-ATE) program, has created and field tested a comprehensive series of industry-based multimedia PBL "Challenges" designed to scaffold the development of students' problem solving and critical thinking skills. In this paper, we present the results of a pilot study conducted to examine student reactions to the PBL Challenges in photonics technician education. During the fall 2012 semester, students (n=12) in two associate degree level photonics courses engaged in PBL using the PBL Challenges. Qualitative and quantitative methods were used to assess student motivation, self-efficacy, critical thinking, metacognitive self-regulation, and peer learning using selected scales from the Motivated Strategies for Learning Questionnaire (MSLQ). Results showed positive gains in all variables. Follow-up focus group interviews yielded positive themes supporting the effectiveness of PBL in developing the knowledge, skills and attitudes of photonics technicians.

  12. The min-conflicts heuristic: Experimental and theoretical results

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip

    1991-01-01

    This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
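
    A minimal sketch of the min-conflicts repair loop on n-queens (illustrative only; the paper's scheduling systems add task-specific constraints and smarter initialization):

        import random

        def conflicts(queens, col, row):
            """Queens attacking square (col, row); queens[c] holds the row of column c's queen."""
            return sum(1 for c, r in enumerate(queens)
                       if c != col and (r == row or abs(r - row) == abs(c - col)))

        def min_conflicts(n, max_steps=100000):
            queens = [random.randrange(n) for _ in range(n)]      # random initial assignment
            for _ in range(max_steps):
                conflicted = [c for c in range(n) if conflicts(queens, c, queens[c])]
                if not conflicted:
                    return queens                                  # all constraints satisfied
                col = random.choice(conflicted)
                # repair: move this queen to the row minimizing its conflicts
                queens[col] = min(range(n), key=lambda r: conflicts(queens, col, r))
            return None

        print(min_conflicts(50))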

  13. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For different values of the parameters that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model. PMID:29518121

  14. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    PubMed

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For different values of the parameters that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model.

  15. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
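
    A minimal k-vector sketch (simplified from the published construction; the padding constant and the trim loops are implementation choices, not the authors' exact code). After an O(n log n) build on a static data set, a range query costs an O(1) index lookup plus a short local scan:

        import numpy as np

        def build_kvector(values):
            """Preprocess a static data set for k-vector range searching."""
            s = np.sort(values)
            n = len(s)
            pad = 1e-9 * (s[-1] - s[0])
            m = (s[-1] - s[0] + 2 * pad) / (n - 1)   # slope of the reference line
            q = s[0] - pad - m                        # line passes just below s[0] at x = 1
            # k[i] counts elements at or below the line evaluated at x = i + 1
            k = np.searchsorted(s, m * np.arange(1, n + 1) + q, side="right")
            return s, k, m, q

        def kvector_range(s, k, m, q, ya, yb):
            """Return the sorted elements lying in [ya, yb]."""
            n = len(s)
            lo = k[int(np.clip(np.floor((ya - q) / m), 1, n)) - 1]
            hi = k[int(np.clip(np.ceil((yb - q) / m), 1, n)) - 1]
            while lo > 0 and s[lo - 1] >= ya:         # trim to the exact endpoints
                lo -= 1
            while lo < n and s[lo] < ya:
                lo += 1
            while hi > lo and s[hi - 1] > yb:
                hi -= 1
            return s[lo:hi]

        data = np.random.rand(100000)
        s, k, m, q = build_kvector(data)
        print(kvector_range(s, k, m, q, 0.250, 0.251))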

  16. [Research progress on hydrological scaling].

    PubMed

    Liu, Jianmei; Pei, Tiefan

    2003-12-01

    With the development of hydrology and the growing impact of human activity on the environment, scale issues have become a great challenge for many hydrologists, owing to the stochasticity and complexity of hydrological phenomena and natural catchments. Increasing attention has been paid to scaling, i.e., inferring the hydrological characteristics of a larger (or smaller) scale from those of a catchment at a known scale, but the problem has not yet been solved successfully. The first part of this paper introduces some concepts related to hydrological scale, scale issues and scaling. The key problem is the spatial heterogeneity of catchments and the temporal and spatial variability of hydrological fluxes. Three approaches to scaling are then put forward: distributed modeling, fractal theory, and statistical self-similarity analyses. Existing problems and future research directions are discussed in the last part.

  17. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
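
    The dense-eigensolve bottleneck the paper avoids is easy to see in a small example. A minimal sketch of the classical truncated KL approach on a 1-D mesh (mesh size, correlation length, and truncation rank are arbitrary choices here, not values from the paper):

        import numpy as np

        n = 500                                   # mesh points (deliberately small)
        x = np.linspace(0.0, 1.0, n)
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # exponential covariance

        # truncated Karhunen-Loeve expansion: keep the r largest eigenpairs
        vals, vecs = np.linalg.eigh(C)            # dense O(n^3) eigensolve -- the bottleneck
        idx = np.argsort(vals)[::-1][:50]
        lam, phi = vals[idx], vecs[:, idx]

        xi = np.random.standard_normal(len(lam))  # i.i.d. standard normal modes
        sample = phi @ (np.sqrt(lam) * xi)        # one realization of the field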

  18. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  19. Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2017-11-01

    The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation were simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code models both rigid and deformable solids exposed to flow. It was validated on the classic problems of a rigid ellipsoidal particle orbiting in shear flow, blood cell stretching tests, and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling is given for the transport of flexible filaments (drug carriers) in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.
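
    The IBM coupling step can be sketched compactly. Below is a minimal 2-D illustration of velocity interpolation to a Lagrangian marker with Peskin's 4-point kernel (an illustrative stand-in, not the Palabos/LAMMPS coupling code; the periodic wrap is an assumption). Force spreading from markers back to the grid reuses the same weights in reverse.

        import numpy as np

        def peskin4(r):
            """Peskin's 4-point discrete delta function (one direction, grid units)."""
            r = abs(r)
            if r < 1.0:
                return (3 - 2 * r + np.sqrt(1 + 4 * r - 4 * r * r)) / 8
            if r < 2.0:
                return (5 - 2 * r - np.sqrt(-7 + 12 * r - 4 * r * r)) / 8
            return 0.0

        def interpolate_velocity(u, X):
            """Interpolate grid velocities u[ix, iy, :] to a marker at X (grid units)."""
            vel = np.zeros(u.shape[-1])
            i0, j0 = int(X[0]), int(X[1])
            for i in range(i0 - 1, i0 + 3):
                for j in range(j0 - 1, j0 + 3):
                    w = peskin4(X[0] - i) * peskin4(X[1] - j)   # tensor-product kernel
                    vel += w * u[i % u.shape[0], j % u.shape[1]]
            return vel

        u = np.random.rand(32, 32, 2)               # toy velocity field
        print(interpolate_velocity(u, np.array([10.3, 20.7])))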

  20. The Development and Validation of Scores on the Mathematics Information Processing Scale (MIPS).

    ERIC Educational Resources Information Center

    Bessant, Kenneth C.

    1997-01-01

    This study reports on the development and psychometric properties of a new 87-item Mathematics Information Processing Scale that explores learning strategies, metacognitive problem-solving skills, and attentional deployment. Results with 340 college students support the use of the instrument, for which factor analysis identified five theoretically…

  1. Radiative Natural Supersymmetry with Mixed Axion/Higgsino Cold Dark Matter

    NASA Astrophysics Data System (ADS)

    Baer, Howard

    Models of natural supersymmetry seek to solve the little hierarchy problem by positing a spectrum of light higgsinos ≲ 200 GeV and light top squarks ≲ 500 GeV along with very heavy squarks and TeV-scale gluinos. Such models have low electroweak finetuning and are safe from LHC searches. However, in the context of the MSSM, they predict too low a value of mh, and the relic density of thermally produced higgsino-like WIMPs falls well below dark matter (DM) measurements. Allowing for a high-scale soft SUSY breaking Higgs mass mHu > m0 leads to natural cancellations during RG running, and to radiatively induced low finetuning at the electroweak scale. This model of radiative natural SUSY (RNS), with large mixing in the top squark sector, allows for finetuning at the 5-10% level with TeV-scale top squarks and a 125 GeV light Higgs scalar h. If the strong CP problem is solved via the PQ mechanism, then we expect an axion-higgsino admixture of dark matter, where either or both the DM particles might be directly detected.

  2. Radiative natural supersymmetry with mixed axion/higgsino cold dark matter

    NASA Astrophysics Data System (ADS)

    Baer, Howard

    2013-05-01

    Models of natural supersymmetry seek to solve the little hierarchy problem by positing a spectrum of light higgsinos <~ 200 GeV and light top squarks <~ 500 GeV along with very heavy squarks and TeV-scale gluinos. Such models have low electroweak finetuning and are safe from LHC searches. However, in the context of the MSSM, they predict too low a value of mh and the relic density of thermally produced higgsino-like WIMPs falls well below dark matter (DM) measurements. Allowing for high scale soft SUSY breaking Higgs mass mHu > m0 leads to natural cancellations during RG running, and to radiatively induced low finetuning at the electroweak scale. This model of radiative natural SUSY (RNS), with large mixing in the top squark sector, allows for finetuning at the 5-10% level with TeV-scale top squarks and a 125 GeV light Higgs scalar h. If the strong CP problem is solved via the PQ mechanism, then we expect an axion-higgsino admixture of dark matter, where either or both the DM particles might be directly detected.

  3. Peccei-Quinn relaxion

    NASA Astrophysics Data System (ADS)

    Jeong, Kwang Sik; Shin, Chang Sub

    2018-01-01

    The relaxation mechanism, which solves the electroweak hierarchy problem without relying on TeV scale new physics, crucially depends on how a Higgs-dependent back-reaction potential is generated. In this paper, we suggest a new scenario in which the scalar potential induced by the QCD anomaly is responsible both for the relaxation mechanism and the Peccei-Quinn mechanism to solve the strong CP problem. The key idea is to introduce the relaxion and the QCD axion whose cosmic evolutions become quite different depending on an inflaton-dependent scalar potential. Our scheme raises the cutoff scale of the Higgs mass up to 10^7 GeV, and allows reheating temperature higher than the electroweak scale as would be required for viable cosmology. In addition, the QCD axion can account for the observed dark matter of the universe as produced by the conventional misalignment mechanism. We also consider the possibility that the couplings of the Standard Model depend on the inflaton and become stronger during inflation. In this case, the relaxation can be implemented with a sub-Planckian field excursion of the relaxion for a cutoff scale below 10 TeV.

  4. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely coupled sub-problems, each of which may be modularly formulated by a different department and solved by a modular analytical service. The results demonstrate that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  5. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
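
    The core "sketching" step can be illustrated in a few lines. A minimal sketch on stand-in data (the paper's RGA is implemented in Julia within MADS and builds on the PCGA, none of which is reproduced here):

        import numpy as np

        m, n, k = 20000, 50, 200         # observations, parameters, sketch size
        A = np.random.randn(m, n)        # stand-in sensitivity (Jacobian) matrix
        x_true = np.random.randn(n)
        b = A @ x_true + 0.01 * np.random.randn(m)

        S = np.random.randn(k, m) / np.sqrt(k)               # Gaussian sketching matrix
        x_hat = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0] # reduced-size solve

        # relative error stays small although only k << m sketched rows were solved
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))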

  6. Predictors of response to Systems Training for Emotional Predictability and Problem Solving (STEPPS) for borderline personality disorder: an exploratory study.

    PubMed

    Black, D W; Allen, J; St John, D; Pfohl, B; McCormick, B; Blum, N

    2009-07-01

    Few predictors of treatment outcome or early discontinuation have been identified in persons with borderline personality disorder (BPD). The aim of the study was to examine the relationship between baseline clinical variables and treatment response and early discontinuation in a randomized controlled trial of Systems Training for Emotional Predictability and Problem Solving, a new cognitive group treatment. Improvement was rated using the Zanarini Rating Scale for BPD, the Clinical Global Impression Scale, the Global Assessment Scale and the Beck Depression Inventory. Subjects were assessed during the 20-week trial and a 1-year follow-up. Higher baseline severity was associated with greater improvement in global functioning and BPD-related symptoms. Higher impulsivity was predictive of early discontinuation. Optimal improvement was associated with attending ≥ 15 sessions. Subjects likely to improve had the more severe BPD symptoms at baseline, while high levels of impulsivity were associated with early discontinuation.

  7. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte data cubes, the story of the giant SKA telescope is also one of collaborative efforts from radioastronomy, signal processing, optimization and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (partially, of course) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and ventilation of the computational load over the spectral bands. This work will allow us to implement and compare various 3D reconstruction approaches in a large-scale framework.

  8. Two- and three-dimensional folding of thin film single-crystalline silicon for photovoltaic power applications

    PubMed Central

    Guo, Xiaoying; Li, Huan; Yeop Ahn, Bok; Duoss, Eric B.; Hsia, K. Jimmy; Lewis, Jennifer A.; Nuzzo, Ralph G.

    2009-01-01

    Fabrication of 3D electronic structures in the micrometer-to-millimeter range is extremely challenging due to the inherently 2D nature of most conventional wafer-based fabrication methods. Self-assembly, and the related method of self-folding of planar patterned membranes, provide a promising means to solve this problem. Here, we investigate self-assembly processes driven by wetting interactions to shape the contour of a functional, nonplanar photovoltaic (PV) device. A mechanics model based on the theory of thin plates is developed to identify the critical conditions for self-folding of different 2D geometrical shapes. This strategy is demonstrated for specifically designed millimeter-scale silicon objects, which are self-assembled into spherical, and other 3D shapes and integrated into fully functional light-trapping PV devices. The resulting 3D devices offer a promising way to efficiently harvest solar energy in thin cells using concentrator microarrays that function without active light tracking systems. PMID:19934059

  9. Two- and three-dimensional folding of thin film single-crystalline silicon for photovoltaic power applications.

    PubMed

    Guo, Xiaoying; Li, Huan; Ahn, Bok Yeop; Duoss, Eric B; Hsia, K Jimmy; Lewis, Jennifer A; Nuzzo, Ralph G

    2009-12-01

    Fabrication of 3D electronic structures in the micrometer-to-millimeter range is extremely challenging due to the inherently 2D nature of most conventional wafer-based fabrication methods. Self-assembly, and the related method of self-folding of planar patterned membranes, provide a promising means to solve this problem. Here, we investigate self-assembly processes driven by wetting interactions to shape the contour of a functional, nonplanar photovoltaic (PV) device. A mechanics model based on the theory of thin plates is developed to identify the critical conditions for self-folding of different 2D geometrical shapes. This strategy is demonstrated for specifically designed millimeter-scale silicon objects, which are self-assembled into spherical, and other 3D shapes and integrated into fully functional light-trapping PV devices. The resulting 3D devices offer a promising way to efficiently harvest solar energy in thin cells using concentrator microarrays that function without active light tracking systems.

  10. The Distribution of Cosmic-Ray Sources in the Galaxy, Gamma-Rays and the Gradient in the CO-to-H2 Relation

    NASA Technical Reports Server (NTRS)

    Strong, A. W.; Moskalenko, I. V.; Reimer, O.; Diehl, S.; Diehl, R.

    2004-01-01

    We present a solution to the apparent discrepancy between the radial gradient in the diffuse Galactic gamma-ray emissivity and the distribution of supernova remnants, believed to be the sources of cosmic rays. Recent determinations of the pulsar distribution have made the discrepancy even more apparent. The problem is shown to be plausibly solved by a variation in the W_CO-to-N(H2) scaling factor. If this factor increases by a factor of 5-10 from the inner to the outer Galaxy, as expected from the Galactic metallicity gradient and supported by other evidence, we show that the source distribution required to match the radial gradient of gamma-rays can be reconciled with the distribution of supernova remnants as traced by current studies of pulsars. The resulting model fits the EGRET gamma-ray profiles extremely well in longitude, and reproduces the mid-latitude inner Galaxy intensities better than previous models.

  11. Comparison of application of various crossovers in solving inhomogeneous minimax problem modified by Goldberg model

    NASA Astrophysics Data System (ADS)

    Kobak, B. V.; Zhukovskiy, A. G.; Kuzin, A. P.

    2018-05-01

    This paper considers one of the classical NP-complete problems, the inhomogeneous minimax problem. When solving such problems at large scale, obtaining an exact solution becomes difficult, so we instead aim for a near-optimal solution in acceptable time. Among the wide range of genetic algorithm models, we choose the modified Goldberg model, which the authors have previously applied successfully to NP-complete problems. The classical Goldberg model uses a single-point crossover and a single-point mutation, which somewhat decreases the accuracy of the obtained results. In this article, we propose using a full two-point crossover together with various previously studied mutations. In addition, the work studies the crossover probability needed to obtain more accurate results. Results of the computational experiment show that the higher the crossover probability, the higher the quality of both the average results and the best solutions. It was also found that the larger the number of individuals and the number of repetitions, the closer both the average results and the best solutions are to the optimum. The paper shows how the use of a full two-point crossover increases the accuracy of solving an inhomogeneous minimax problem; the solution time increases but remains polynomial.
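
    A minimal sketch of the full two-point crossover itself (the encoding of individuals as job-to-processor assignments is an illustrative assumption, not taken from the paper):

        import random

        def two_point_crossover(parent_a, parent_b, p_cross=0.9):
            """Exchange the segment between two random cut points with probability p_cross."""
            if random.random() > p_cross or len(parent_a) < 3:
                return parent_a[:], parent_b[:]
            i, j = sorted(random.sample(range(1, len(parent_a)), 2))
            child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
            child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
            return child_a, child_b

        # e.g. two assignments of 7 jobs to 3 processors
        a, b = [0, 1, 2, 0, 1, 2, 0], [2, 2, 1, 1, 0, 0, 2]
        print(two_point_crossover(a, b))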

  12. Approximation of Nash equilibria and the network community structure detection problem

    PubMed Central

    2017-01-01

    Game theory based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better or just as good as other state-of-the-art methods. PMID:28467496

  13. A fast approach to designing airfoils from given pressure distribution in compressible flows

    NASA Technical Reports Server (NTRS)

    Daripa, Prabir

    1987-01-01

    A new inverse method for aerodynamic design of airfoils is presented for subcritical flows. The pressure distribution in this method can be prescribed as a function of the arc length of the as-yet unknown body. This inverse problem is shown to be mathematically equivalent to solving only one nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the freestream Mach number, and the upstream flow direction. The existence of a solution to a given pressure distribution is discussed. The method is easy to implement and extremely efficient. A series of results for which comparisons are made with known airfoils is presented.

  14. The Relationship Between Non-Symbolic Multiplication and Division in Childhood

    PubMed Central

    McCrink, Koleen; Shafto, Patrick; Barth, Hilary

    2016-01-01

    Children without formal education in addition and subtraction are able to perform multi-step operations over an approximate number of objects. Further, their performance improves when solving approximate (but not exact) addition and subtraction problems that allow for inversion as a shortcut (e.g., a + b − b = a). The current study examines children’s ability to perform multi-step operations, and the potential for an inversion benefit, for the operations of approximate, non-symbolic multiplication and division. Children were trained to compute a multiplication and division scaling factor (*2 or /2, *4 or /4), and then tested on problems that combined two of these factors in a way that either allowed for an inversion shortcut (e.g., 8 * 4 / 4) or did not (e.g., 8 * 4 / 2). Children’s performance was significantly better than chance for all scaling factors during training, and they successfully computed the outcomes of the multi-step testing problems. They did not exhibit a performance benefit for problems with the a * b / b structure, suggesting they did not draw upon inversion reasoning as a logical shortcut to help them solve the multi-step test problems. PMID:26880261

  15. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these individual algorithms. Following this discussion, the combined parallel algorithm, known as the unified Lambert tool, is presented and an explanation is given as to how it automatically selects which of the three perturbed solvers to compute the perturbed solution for a particular orbit transfer. The unified Lambert tool may be used to determine a single orbit transfer or for generating of an extremal field map. A case study is presented for a mission that is required to rendezvous with two pieces of orbit debris (spent rocket boosters). The unified Lambert tool software developed in this dissertation is already being utilized by several industrial partners and we are confident that it will play a significant role in practical applications, including solution of Lambert problems that arise in the current applications focused on enhanced space situational awareness.
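
    The fixed-point loop these solvers share can be sketched for a scalar initial value problem. A minimal Chebyshev-Picard iteration (illustrative of the basic idea only; the dissertation's algorithms add perturbation handling, boundary-value formulations, and KS regularization):

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def cheb_picard(f, t0, tf, x0, deg=40, iters=30):
            """Picard iteration x <- x0 + integral of f, with the integral done
            in a Chebyshev basis on Chebyshev-Lobatto nodes."""
            tau = np.cos(np.pi * np.arange(deg + 1) / deg)    # nodes in [-1, 1]
            t = 0.5 * (tf - t0) * (tau + 1.0) + t0
            x = np.full_like(t, float(x0))                    # initial guess
            for _ in range(iters):
                c = C.chebfit(tau, f(t, x), deg)              # fit f along current iterate
                ci = C.chebint(c) * 0.5 * (tf - t0)           # integrate; chain rule for t(tau)
                x_new = x0 + C.chebval(tau, ci) - C.chebval(-1.0, ci)   # enforce x(t0) = x0
                if np.max(np.abs(x_new - x)) < 1e-12:
                    break
                x = x_new
            return t, x

        # x' = -2 t x, x(0) = 1 has exact solution exp(-t^2)
        t, x = cheb_picard(lambda t, x: -2 * t * x, 0.0, 1.0, 1.0)
        print(np.max(np.abs(x - np.exp(-t ** 2))))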

  16. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). Behind these investigations, we have encountered a subtle numerical problem in large-scale shell-model calculations for odd-odd N = Z nuclei. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  17. Neural Networks For Demodulation Of Phase-Modulated Signals

    NASA Technical Reports Server (NTRS)

    Altes, Richard A.

    1995-01-01

    Hopfield neural networks are proposed for demodulating quadrature phase-shift-keyed (QPSK) signals carrying digital information. The networks solve nonlinear integral equations that prior demodulation circuits cannot solve. Each network consists of a set of N operational amplifiers connected in parallel, with weighted feedback from the output terminal of each amplifier to the input terminals of the other amplifiers, and is used to solve signal processing problems. It can be implemented as an analog very-large-scale integrated circuit that achieves rapid convergence, or alternatively as a digital simulation of such a circuit. It can also be used to improve phase-estimation performance over that of a phase-locked loop.
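
    A toy Hopfield sketch showing the weighted-feedback dynamics (a generic associative-memory example, not the demodulation network described above):

        import numpy as np

        def train_hopfield(patterns):
            """Hebbian weights for a set of +/-1 patterns; zero self-feedback."""
            n = patterns.shape[1]
            W = patterns.T @ patterns / n
            np.fill_diagonal(W, 0.0)
            return W

        def recall(W, state, steps=200):
            s = state.copy()
            for _ in range(steps):
                i = np.random.randint(len(s))          # asynchronous update
                s[i] = 1 if W[i] @ s >= 0 else -1      # threshold on weighted feedback
            return s

        pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
        W = train_hopfield(pats)
        noisy = pats[0].copy(); noisy[0] = -noisy[0]   # corrupt one bit
        print(recall(W, noisy))                        # converges back to pats[0]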

  18. First Constraints on Fuzzy Dark Matter from Lyman-α Forest Data and Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Iršič, Vid; Viel, Matteo; Haehnelt, Martin G.; Bolton, James S.; Becker, George D.

    2017-07-01

    We present constraints on the masses of extremely light bosons dubbed fuzzy dark matter (FDM) from Lyman-α forest data. Extremely light bosons with a de Broglie wavelength of ~1 kpc have been suggested as dark matter candidates that may resolve some of the current small-scale problems of the cold dark matter model. For the first time, we use hydrodynamical simulations to model the Lyman-α flux power spectrum in these models and compare it to the observed flux power spectrum from two different data sets: the XQ-100 and HIRES/MIKE quasar spectra samples. After marginalization over nuisance and physical parameters and with conservative assumptions for the thermal history of the intergalactic medium (IGM) that allow for jumps in the temperature of up to 5000 K, XQ-100 provides a lower limit of 7.1 × 10^-22 eV, HIRES/MIKE returns a stronger limit of 14.3 × 10^-22 eV, while the combination of both data sets results in a limit of 20 × 10^-22 eV (2σ C.L.). The limit from the analysis of the combined data sets increases to 37.5 × 10^-22 eV (2σ C.L.) when a smoother thermal history is assumed, where the temperature of the IGM evolves as a power law in redshift. Light boson masses in the range 1-10 × 10^-22 eV are ruled out at high significance by our analysis, casting strong doubts that FDM helps solve the "small scale crisis" of the cold dark matter models.

  19. Rasch analysis of the Italian Lower Extremity Functional Scale: insights on dimensionality and suggestions for an improved 15-item version.

    PubMed

    Bravini, Elisabetta; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano

    2017-04-01

    Objective: To investigate dimensionality and the measurement properties of the Italian Lower Extremity Functional Scale using both classical test theory and Rasch analysis methods, and to provide insights for an improved version of the questionnaire. Design: Rasch analysis of individual patient data. Setting: Rehabilitation centre. Subjects: A total of 135 patients with musculoskeletal diseases of the lower limb. Main measures: Patients were assessed with the Lower Extremity Functional Scale before and after rehabilitation. Results: Rasch analysis showed some problems related to rating scale category functioning, item fit, and item redundancy. After an iterative process, which resulted in the reduction of rating scale categories from 5 to 4 and in the deletion of 5 items, the psychometric properties of the Italian Lower Extremity Functional Scale improved. The retained 15 items with a 4-level response format fitted the Rasch model (internal construct validity), and demonstrated unidimensionality and good reliability indices (person-separation reliability 0.92; Cronbach's alpha 0.94). The analysis also showed differential item functioning for six of the retained items. The sensitivity to change of the Italian 15-item Lower Extremity Functional Scale was nearly equal to that of the original version (effect size: 0.93 and 0.98; standardized response mean: 1.20 and 1.28, respectively, for the 15-item and 20-item versions). Conclusion: The original Italian Lower Extremity Functional Scale had unsatisfactory measurement properties; removing five items and simplifying the scoring from 5 to 4 levels resulted in a more valid measure with good reliability and sensitivity to change.

  20. Solving lot-sizing problem with quantity discount and transportation cost

    NASA Astrophysics Data System (ADS)

    Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei

    2013-04-01

    Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve, and the incorporation of heuristic methods to tackle such complex problems has become a trend in the past decade. This article considers a lot-sizing problem whose objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases in a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of changes in parameters on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining multi-period replenishment in touch panel manufacturing with quantity discounts and batch transportation. The contributions of this article are to construct an MIP model that obtains an optimal solution when the problem is not too complicated, and to present a GA model that finds a near-optimal solution efficiently when the problem is complicated.

  1. Numerical solution of the electron transport equation

    NASA Astrophysics Data System (ADS)

    Woods, Mark

    The electron transport equation has been solved many times for a variety of reasons. The main difficulty in its numerical solution is that it is a very stiff boundary value problem. The most common numerical methods for solving boundary value problems are symmetric collocation methods and shooting methods. Both of these types of methods can only be applied to the electron transport equation if the boundary conditions are altered with unrealistic assumptions, because they require too many points to be practical. Further, they result in oscillating and negative solutions, which are physically meaningless for the problem at hand. For these reasons, all numerical methods for this problem to date are somewhat unusual, because they were designed to try to avoid the problem of extreme stiffness. This dissertation shows that there is no need to introduce spurious boundary conditions or invent other numerical methods for the electron transport equation. Rather, there already exist methods for very stiff boundary value problems within the numerical analysis literature. We demonstrate one such method, in which the fast and slow modes of the boundary value problem are essentially decoupled. This allows an upwind finite difference method to be applied to each mode as appropriate. This greatly reduces the number of points needed in the mesh, and we demonstrate how this eliminates the need to define new boundary conditions. The method is verified by showing that, under certain restrictive assumptions, the electron transport equation has an exact solution that can be written as an integral. We show that the solution from the upwind method agrees with the quadrature evaluation of the exact solution. This serves to verify that the upwind method is properly solving the electron transport equation. Further, it is demonstrated that the output of the upwind method can be used to compute auroral light emissions.
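
    The upwinding idea can be seen on a model stiff problem. A minimal sketch on the singularly perturbed equation eps*u'' + u' = 1, u(0) = u(1) = 0 (a standard textbook surrogate, not the dissertation's transport equation):

        import numpy as np

        eps, n = 1e-6, 200
        h = 1.0 / n
        # interior unknowns u_1 .. u_{n-1}; the reduced equation u' = 1 carries
        # information leftward (wind speed -1), so u' is upwinded with the
        # forward difference (u[i+1] - u[i]) / h, giving a monotone scheme
        A = np.zeros((n - 1, n - 1))
        rhs = np.full(n - 1, 1.0)
        for i in range(n - 1):
            A[i, i] = -2 * eps / h**2 - 1 / h
            if i > 0:
                A[i, i - 1] = eps / h**2
            if i < n - 2:
                A[i, i + 1] = eps / h**2 + 1 / h
        u = np.concatenate(([0.0], np.linalg.solve(A, rhs), [0.0]))
        # u passes monotonically through the x = 0 boundary layer; a centered
        # difference for u' on this mesh would produce grid-scale oscillations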

  2. Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.

    2018-03-01

    This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, and is based on the respective algorithm convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein, however the domain of convergence is limited to about a third of an orbit and is dependent on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed set of algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple revolution perturbed transfers. This method does require "shooting" but differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix. The unified Lambert tool makes use of the General Mission Analysis Tool and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational Awareness computer cluster at the LASR Lab, Texas A&M University. We demonstrate the power of our tool by solving a highly parallel example problem, that is the generation of extremal field maps for optimal spacecraft rendezvous (and eventual orbit debris removal). In addition we demonstrate the need for including perturbative effects in simulations for satellite tracking or data association. The unified Lambert tool is ideal for but not limited to space situational awareness applications.

  3. Solving Navier-Stokes equations on a massively parallel processor; The 1 GFLOP performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saati, A.; Biringen, S.; Farhat, C.

    This paper reports on experience in solving large-scale fluid dynamics problems on the Connection Machine model CM-2. The authors have implemented a parallel version of the MacCormack scheme for the solution of the Navier-Stokes equations. By using triad floating point operations and reducing the number of interprocessor communications, they have achieved a sustained performance rate of 1.42 GFLOPS.
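
    For reference, the MacCormack scheme is a two-step predictor-corrector; this minimal serial sketch for 1D linear advection (a stand-in chosen for illustration; the paper applies the scheme to the full Navier-Stokes equations on the CM-2) shows its structure:

      import numpy as np

      # MacCormack scheme for u_t + a u_x = 0 on a periodic domain.
      a, n, cfl = 1.0, 200, 0.8
      x = np.linspace(0.0, 1.0, n, endpoint=False)
      dx = x[1] - x[0]
      dt = cfl * dx / abs(a)
      u = np.exp(-200.0 * (x - 0.5) ** 2)      # smooth initial pulse

      for _ in range(100):
          # Predictor: forward difference in space
          up = u - a * dt / dx * (np.roll(u, -1) - u)
          # Corrector: backward difference on the predicted field
          u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))
      print(u.max())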

  4. Application of Differential Evolutionary Optimization Methodology for Parameter Structure Identification in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.

    2013-12-01

    With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment with a continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water-management optimization problems.
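
    The mutation-crossover-selection cycle the abstract describes is easy to sketch generically. The following DE/rand/1/bin routine is a minimal illustration (our code, not the authors' PSI implementation; the control-parameter values are arbitrary):

      import numpy as np

      def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9,
                                 gens=200, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          d = len(lo)
          pop = rng.uniform(lo, hi, size=(pop_size, d))
          fit = np.array([obj(p) for p in pop])
          for _ in range(gens):
              for i in range(pop_size):
                  r1, r2, r3 = rng.choice(
                      [j for j in range(pop_size) if j != i], 3, replace=False)
                  mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
                  cross = rng.random(d) < CR
                  cross[rng.integers(d)] = True       # at least one gene crosses
                  trial = np.where(cross, mutant, pop[i])
                  f = obj(trial)
                  if f < fit[i]:                      # greedy selection
                      pop[i], fit[i] = trial, f
          return pop[fit.argmin()], fit.min()

      best, val = differential_evolution(lambda p: ((p - 1.0) ** 2).sum(),
                                         [(-5, 5), (-5, 5)])
      print(best, val)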

  5. Specificity of Problem-Solving Skills Training in Mothers of Children Newly Diagnosed With Cancer: Results of a Multisite Randomized Clinical Trial

    PubMed Central

    Sahler, Olle Jane Z.; Dolgin, Michael J.; Phipps, Sean; Fairclough, Diane L.; Askins, Martha A.; Katz, Ernest R.; Noll, Robert B.; Butler, Robert W.

    2013-01-01

    Purpose Diagnosis of cancer in a child can be extremely stressful for parents. Bright IDEAS, a problem-solving skills training (PSST) intervention, has been shown to decrease negative affectivity (anxiety, depression, post-traumatic stress symptoms) in mothers of newly diagnosed patients. This study was designed to determine the specificity of PSST by examining its direct and indirect (eg, social support) effects compared with a nondirective support (NDS) intervention. Patients and Methods This randomized clinical trial included 309 English- or Spanish-speaking mothers of children diagnosed 2 to 16 weeks before recruitment. Participants completed assessments prerandomization (T1), immediately postintervention (T2), and at 3-month follow-up (T3). Both PSST and NDS consisted of eight weekly 1-hour individual sessions. Outcomes included measures of problem-solving skill and negative affectivity. Results There were no significant between-group differences at baseline (T1). Except for level of problem-solving skill, which was directly taught in the PSST arm, outcome measures improved equally in both groups immediately postintervention (T2). However, at the 3-month follow-up (T3), mothers in the PSST group continued to show significant improvements in mood, anxiety, and post-traumatic stress; mothers in the NDS group showed no further significant gains. Conclusion PSST is an effective and specific intervention whose beneficial effects continue to grow after the intervention ends. In contrast, NDS is an effective intervention while it is being administered, but its benefits plateau when active support is removed. Therefore, teaching coping skills at diagnosis has the potential to facilitate family resilience over the entire course of treatment. PMID:23358975

  6. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
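
    The serial kernel being parallelized can be sketched as follows: a plain column-oriented Choleski factorization with triangular solves (our sketch, not the Force implementation), showing the operations the paper distributes and vectorizes:

      import numpy as np

      def choleski_solve(A, b):
          # Factor A = L L^T in the lower triangle, then solve L y = b
          # and L^T x = y by forward and back substitution.
          L = A.astype(float).copy()
          n = len(L)
          for j in range(n):
              L[j, j] = np.sqrt(L[j, j] - L[j, :j] @ L[j, :j])
              L[j+1:, j] = (L[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
          y = np.zeros(n)
          for i in range(n):
              y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
          x = np.zeros(n)
          for i in range(n - 1, -1, -1):
              x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
          return x

      A = np.array([[4.0, 2.0], [2.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(choleski_solve(A, b), np.linalg.solve(A, b))   # should agree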

  7. SCOUT: simultaneous time segmentation and community detection in dynamic networks

    PubMed Central

    Hulovatyy, Yuriy; Milenković, Tijana

    2016-01-01

    Many evolving complex real-world systems can be modeled via dynamic networks. An important problem in dynamic network research is community detection, which finds groups of topologically related nodes. Typically, this problem is approached by assuming either that each time point has a distinct community organization or that all time points share a single community organization. The reality likely lies between these two extremes. To find the compromise, we consider community detection in the context of the problem of segment detection, which identifies contiguous time periods with consistent network structure. Consequently, we formulate a combined problem of segment community detection (SCD), which simultaneously partitions the network into contiguous time segments with consistent community organization and finds this community organization for each segment. To solve SCD, we introduce SCOUT, an optimization framework that explicitly considers both segmentation quality and partition quality. SCOUT addresses limitations of existing methods that can be adapted to solve SCD, which consider only one of segmentation quality or partition quality. In a thorough evaluation, SCOUT outperforms the existing methods in terms of both accuracy and computational complexity. We apply SCOUT to biological network data to study human aging. PMID:27881879

  8. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices, and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs, and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
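
    As a concrete instance of the sequence-analysis problems mentioned, here is the textbook Needleman-Wunsch global alignment recurrence (the scoring parameters below are arbitrary choices for illustration):

      import numpy as np

      def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
          # Dynamic programming table: S[i, j] is the best score aligning
          # the first i characters of a with the first j characters of b.
          m, n = len(a), len(b)
          S = np.zeros((m + 1, n + 1))
          S[:, 0] = gap * np.arange(m + 1)
          S[0, :] = gap * np.arange(n + 1)
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  diag = S[i-1, j-1] + (match if a[i-1] == b[j-1] else mismatch)
                  S[i, j] = max(diag, S[i-1, j] + gap, S[i, j-1] + gap)
          return S[m, n]

      print(needleman_wunsch("GATTACA", "GCATGCU"))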

  9. Measuring Twenty-First Century Skills: Development and Validation of a Scale for In-Service and Pre-Service Teachers

    ERIC Educational Resources Information Center

    Jia, Yueming; Oh, Youn Joo; Sibuma, Bernadette; LaBanca, Frank; Lorentson, Mhora

    2016-01-01

    A self-report scale that measures teachers' confidence in teaching students about twenty-first century skills was developed and validated with pre-service and in-service teachers. First, 16 items were created to measure teaching confidence in six areas: information literacy, collaboration, communication, innovation and creativity, problem solving,…

  10. Score Calculation in Informatics Contests Using Multiple Criteria Decision Methods

    ERIC Educational Resources Information Center

    Skupiene, Jurate

    2011-01-01

    The Lithuanian Informatics Olympiad is a problem solving contest for high school students. The work of each contestant is evaluated in terms of several criteria, where each criterion is measured according to its own scale (but the same scale for each contestant). Several jury members are involved in the evaluation. This paper analyses the problem…

  11. Multi-Item Direct Behavior Ratings: Dependability of Two Levels of Assessment Specificity

    ERIC Educational Resources Information Center

    Volpe, Robert J.; Briesch, Amy M.

    2015-01-01

    Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment…

  12. The Development of a Sport-Based Life Skills Scale for Youth to Young Adults, 11-23 Years of Age

    ERIC Educational Resources Information Center

    Cauthen, Hillary Ayn

    2013-01-01

    The purpose of this study was to develop a sport-based life skills scale that assesses 20 life skills: goal setting, time management, communication, coping, problem solving, leadership, critical thinking, teamwork, self-discipline, decision making, planning, organizing, resiliency, motivation, emotional control, patience, assertiveness, empathy,…

  13. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
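
    In the synthesis setting, the l1-regularized problem has a standard proximal-gradient solver. This flat-space ISTA sketch (our illustration; the paper works with wavelets on the sphere, which this omits) shows the structure:

      import numpy as np

      def ista(Phi, y, lam, iters=200):
          # Solve min_a 0.5 * ||y - Phi a||_2^2 + lam * ||a||_1 by
          # gradient steps on the data term plus soft thresholding.
          L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of gradient
          a = np.zeros(Phi.shape[1])
          for _ in range(iters):
              z = a - Phi.T @ (Phi @ a - y) / L
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return a

      rng = np.random.default_rng(0)
      Phi = rng.standard_normal((40, 100))
      a_true = np.zeros(100); a_true[[3, 30, 77]] = [2.0, -1.5, 1.0]
      y = Phi @ a_true + 0.01 * rng.standard_normal(40)
      print(np.flatnonzero(np.abs(ista(Phi, y, 0.1)) > 0.1))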

  14. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound ? on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
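
    As a rough illustration of the interior-point idea (a log-barrier sketch of our own; the paper's solver is primal-dual and adds a physics-based preconditioner and initial-guess strategy, none of which appear here), consider a nonnegativity-constrained quadratic program:

      import numpy as np

      def barrier_qp(Q, c, mu=1.0, shrink=0.2, tol=1e-8):
          # min 0.5 x^T Q x + c^T x  s.t.  x >= 0, by damped Newton steps
          # on the barrier objective while driving the weight mu -> 0.
          x = np.ones(len(c))
          while mu > tol:
              for _ in range(50):
                  grad = Q @ x + c - mu / x
                  hess = Q + np.diag(mu / x**2)
                  dx = np.linalg.solve(hess, -grad)
                  t = 1.0
                  while np.any(x + t * dx <= 0):   # stay strictly feasible
                      t *= 0.5
                  x = x + 0.9 * t * dx
                  if np.linalg.norm(grad) < mu:
                      break
              mu *= shrink
          return x

      Q = np.array([[2.0, 0.0], [0.0, 2.0]])
      c = np.array([-2.0, 1.0])
      print(barrier_qp(Q, c))   # analytic solution is (1, 0)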

  15. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Goldfinger, A. D.; Beal, R. C.; Tilley, D. G.

    1981-01-01

    The Seasat Synthetic Aperture Radar (SAR) has proved to be an instrument of great utility in the sensing of ocean conditions on a global scale. An analysis of oceanographic and atmospheric aspects of Seasat data has shown that the features observed in the imagery are linked to ocean phenomena such as storm sources and their resulting swell systems. However, there remains one central problem which has not been satisfactorily solved to date. This problem is related to the accurate measurement of wind-generated ocean wave spectra. Investigations addressing this problem are currently being conducted. The problem has two parts, including the accurate measurement of the image spectra and the inference of actual surface wave spectra from these measurements. A description is presented of the progress made towards solving the first part of the problem, taking into account a digital rather than optical computation of the image transforms.

  16. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    NASA Astrophysics Data System (ADS)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method was a mixed-methods approach with a sequential explanatory design. The subjects were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) students had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) students on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with a computer program, was imprecise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all of the problem-solving indicators: this subject could not correctly construct the initial completion table, so the completion phase using Polya's steps was constrained.

  17. Fabrication of electron beam deposited tip for atomic-scale atomic force microscopy in liquid.

    PubMed

    Miyazawa, K; Izumi, H; Watanabe-Nakayama, T; Asakawa, H; Fukuma, T

    2015-03-13

    Recently, the possibilities of improving operation speed and force sensitivity in atomic-scale atomic force microscopy (AFM) in liquid using a small cantilever with an electron beam deposited (EBD) tip have been intensively explored. However, the structure and properties of an EBD tip suitable for such an application have not been well understood, and hence its fabrication process has not been established. In this study, we perform atomic-scale AFM measurements with a small cantilever and identify two major problems: contamination from the cantilever and tip surface, and insufficient mechanical strength of an EBD tip having a high aspect ratio. To solve these problems, we propose a fabrication process in which we attach a 2 μm silica bead at the cantilever end and fabricate a 500-700 nm EBD tip on the bead. The bead height ensures sufficient cantilever-sample distance and makes it possible to suppress long-range interactions between them, even with a short EBD tip having high mechanical strength. After the tip fabrication, we coat the whole cantilever and tip surface with Si (30 nm) to prevent the generation of contamination. We perform atomic-scale AFM imaging and hydration force measurements at a mica-water interface using the fabricated tip and demonstrate its applicability to such atomic-scale applications. By repeating the proposed process, we can reuse a small cantilever for atomic-scale measurements several times. Therefore, the proposed method solves the two major problems and enables the practical use of a small cantilever in atomic-scale studies of various solid-liquid interfacial phenomena.

  18. A Large-scale Distributed Indexed Learning Framework for Data that Cannot Fit into Memory

    DTIC Science & Technology

    2015-03-27

    learn a classifier. Integrating three learning techniques (online, semi-supervised, and active learning) with selective sampling, using minimal communication between the server and the clients, solved this problem.

  19. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  20. Crowdsourced 'R&D' and medical research.

    PubMed

    Callaghan, Christian William

    2015-09-01

    Crowdsourced R&D, a research methodology increasingly applied to medical research, has properties well suited to large-scale medical data collection and analysis, as well as enabling rapid research responses to crises such as disease outbreaks. Multidisciplinary literature offers diverse perspectives of crowdsourced R&D as a useful large-scale medical data collection and research problem-solving methodology. Crowdsourced R&D has demonstrated 'proof of concept' in a host of different biomedical research applications. A wide range of quality and ethical issues relate to crowdsourced R&D. The rapid growth in applications of crowdsourced R&D in medical research is predicted by an increasing body of multidisciplinary theory. Further research in areas such as artificial intelligence may allow better coordination and management of the high volumes of medical data and problem-solving inputs generated by the crowdsourced R&D process. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiprocessing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  2. Off-policy reinforcement learning for H∞ control design.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen

    2015-01-01

    The H∞ control design problem is considered for nonlinear systems with an unknown internal system model. It is known that the nonlinear H∞ control problem can be transformed into solving the so-called Hamilton-Jacobi-Isaacs (HJI) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, model-based approaches cannot be used to approximately solve the HJI equation when an accurate system model is unavailable or costly to obtain in practice. To overcome these difficulties, an off-policy reinforcement learning (RL) method is introduced to learn the solution of the HJI equation from real system data instead of a mathematical system model, and its convergence is proved. In the off-policy RL method, the system data can be generated with arbitrary policies rather than the evaluating policy, which is extremely important and promising for practical systems. For implementation purposes, a neural network (NN)-based actor-critic structure is employed and a least-squares NN weight update algorithm is derived based on the method of weighted residuals. Finally, the developed NN-based off-policy RL method is tested on a linear F16 aircraft plant, and further applied to a rotational/translational actuator system.
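
    For a linear-in-weights critic V(x) = w^T φ(x), a batch least-squares weight solve from off-policy data can be sketched as below. This is a generic LSTD-style fit of our own, standing in for the paper's weighted-residuals solve of the HJI residual (which also updates the actor and disturbance policies):

      import numpy as np

      def ls_critic(phis, phis_next, costs, gamma=0.98):
          # Least-squares TD normal equations:
          #   Phi^T (Phi - gamma * Phi') w = Phi^T r
          A = phis.T @ (phis - gamma * phis_next)
          return np.linalg.solve(A, phis.T @ costs)

      rng = np.random.default_rng(1)
      x = rng.standard_normal((500, 2))          # off-policy sampled states
      x_next = 0.9 * x                           # toy linear dynamics
      phi = lambda s: np.c_[s[:, 0]**2, s[:, 0]*s[:, 1], s[:, 1]**2]
      r = (x ** 2).sum(axis=1)                   # quadratic stage cost
      print(ls_critic(phi(x), phi(x_next), r))   # quadratic value weights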

  3. Managing stress: the influence of gender, age and emotion regulation on coping among university students in Botswana

    PubMed Central

    Monteiro, Nicole M.; Balogun, Shyngle K.; Oratile, Kutlo N.

    2014-01-01

    This study focused on the influence of gender, age and emotion regulation on coping strategies among university students in Botswana. Sixty-four males and 64 females, ranging in age from 18 to 29 years, completed the Difficulty in Emotion Regulation Scale and the Coping Strategy Inventory. Female students used wishful thinking and problem-focused disengagement more than male students; however, there were no other significant gender differences in coping strategies. Older students were more likely to use problem-solving, cognitive restructuring and express-emotion coping strategies. In addition, problems in emotion regulation significantly predicted problem- and emotion-focused engagement and problem- and emotion-focused disengagement coping strategies. There was a unique finding that non-acceptance of emotional responses, a type of emotion suppression, was positively correlated with problem solving, cognitive restructuring, expressing emotion, social support, problem avoidance and wishful thinking coping strategies. Cultural context and implications for student well-being and university support are discussed. PMID:24910491

  4. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  5. In search of the 'Aha!' experience: Elucidating the emotionality of insight problem-solving.

    PubMed

    Shen, Wangbing; Yuan, Yuan; Liu, Chang; Luo, Jing

    2016-05-01

    Although the experience of insight has long been noted, the essence of the 'Aha!' experience, reflecting a sudden change in the brain that accompanies an insight solution, remains largely unknown. This work aimed to uncover the mystery of the 'Aha!' experience through three studies. In Study 1, participants were required to solve a set of verbal insight problems and then subjectively report their affective experience when solving the problem. The participants were found to have experienced many types of emotions, with happiness the most frequently reported one. Multidimensional scaling was employed in Study 2 to simplify the dimensions of these reported emotions. The results showed that these different types of emotions could be clearly placed in two-dimensional space and that the components constituting the 'Aha!' experience mainly reflected positive emotion and approach cognition. To validate the previous findings, in Study 3, participants were asked to select the most appropriate emotional item describing their feelings at the time the problem was solved. The results of this study replicated the multidimensional construct consisting of approach cognition and positive affect. These three studies provide the first direct evidence of the essence of the 'Aha!' experience. The potential significance of the findings is discussed. © 2015 The British Psychological Society.

  6. Coping responses in the midst of terror: the July 22 terror attack at Utøya Island in Norway.

    PubMed

    Jensen, Tine K; Thoresen, Siri; Dyb, Grete

    2015-02-01

    This study examined the peri-trauma coping responses of 325 survivors, mostly youth, after the July 22, 2011 terror attack on Utøya Island in Norway. The aim was to understand peri-trauma coping responses and their relation to subsequent post-traumatic stress (PTS) reactions. Respondents were interviewed face-to-face 4-5 months after the shooting, and most were interviewed at their homes. Peri-trauma coping was assessed using ten selected items from the "How I Cope Under Pressure Scale" (HICUPS), covering the dimensions of problem solving, positive cognitive restructuring, avoidance, support seeking, seeking understanding, and religious coping. PTS reactions were assessed with the UCLA PTSD Reaction Index. The participants reported using a wide variety of coping strategies. Problem solving, positive cognitive restructuring, and seeking understanding strategies were reported most often. Men reported using more problem-solving strategies, whereas women reported more emotion-focused strategies. There were no significant associations between age and the use of coping strategies. Problem solving and positive cognitive restructuring were significantly associated with fewer PTS reactions. The results are discussed in light of previous research and may help to inform early intervention efforts for survivors of traumatic events. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  7. The influence of eating psychopathology on autobiographical memory specificity and social problem-solving.

    PubMed

    Ridout, Nathan; Matharu, Munveen; Sanders, Elizabeth; Wallis, Deborah J

    2015-08-30

    The primary aim was to examine the influence of subclinical disordered eating on autobiographical memory specificity (AMS) and social problem solving (SPS). A further aim was to establish if AMS mediated the relationship between eating psychopathology and SPS. A non-clinical sample of 52 females completed the autobiographical memory test (AMT), where they were asked to retrieve specific memories of events from their past in response to cue words, and the means-end problem-solving task (MEPS), where they were asked to generate means of solving a series of social problems. Participants also completed the Eating Disorders Inventory (EDI) and Hospital Anxiety and Depression Scale. After controlling for mood, high scores on the EDI subscales, particularly Drive-for-Thinness, were associated with the retrieval of fewer specific and a greater proportion of categorical memories on the AMT and with the generation of fewer and less effective means on the MEPS. Memory specificity fully mediated the relationship between eating psychopathology and SPS. These findings have implications for individuals exhibiting high levels of disordered eating, as poor AMS and SPS are likely to impact negatively on their psychological wellbeing and everyday social functioning and could represent a risk factor for the development of clinically significant eating disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  9. Eating attitude in the obese patients: the evaluation in terms of relational factors.

    PubMed

    Keskin, G; Engin, E; Dulgerler, S

    2010-12-01

    • Obesity has become an important health problem because of its gradually increasing incidence across all age groups. • Obesity can be described as a disease of body-fat deposition that negatively affects lifespan and health. • We examined the eating attitudes, body perception, and stress-coping strategies of patients being treated for obesity, and investigated the relationships between their eating attitudes and their socio-demographic characteristics, body perceptions and strategies for coping with stress. • Misperception of the body and the ability to solve problems increased as eating-attitude defects increased; a positive correlation was found between eating-attitude defects and the habit of pursuing social support and the ability to cope. Obesity, a complex disease, involves many psychological problems besides eating disorders. In this study, we aimed to examine the relationship between eating attitude and body perception, which is thought to affect eating attitude in patients diagnosed as obese, the ability to solve problems, strategies for coping with stress, and some socio-demographic features. A total of 99 adults aged between 20 and 68 years, who were examined in the Polyclinic of Endocrinology and Metabolism Diseases, Ege University, Türkiye, constituted the sample of the study. The Eating Attitude Test, the Body Perception Scale and the Scale of Coping Strategies were used to collect the data. Misperception of the body and the ability to solve problems increased as eating-attitude defects increased. A positive correlation was determined between eating-attitude defects and the habit of pursuing social support and the ability to cope. © 2010 Blackwell Publishing.

  10. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves system fault tolerance. Thus, a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, overcoming the drawbacks of multicast under bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body. Estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. An optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case and a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.

  11. Using the crowd as an innovation partner.

    PubMed

    Boudreau, Kevin J; Lakhani, Karim R

    2013-04-01

    From Apple to Merck to Wikipedia, more and more organizations are turning to crowds for help in solving their most vexing innovation and research questions, but managers remain understandably cautious. It seems risky and even unnatural to push problems out to vast groups of strangers distributed around the world, particularly for companies built on a history of internal innovation. How can intellectual property be protected? How can a crowd-sourced solution be integrated into corporate operations? What about the costs? These concerns are all reasonable, the authors write, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. After a decade of study, they have identified when crowds tend to outperform internal organizations (or not). They outline four ways to tap into crowd-powered problem solving--contests, collaborative communities, complementors, and labor markets--and offer a system for picking the best one in a given situation. Contests, for example, are suited to highly challenging technical, analytical, and scientific problems; design problems; and creative or aesthetic projects. They are akin to running a series of independent experiments that generate multiple solutions--and if those solutions cluster at some extreme, a company can gain insight into where a problem's "technical frontier" lies. (Internal R&D may generate far less information.)

  12. Homogeneous cosmological models and new inflation

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.; Widrow, Lawrence M.

    1986-01-01

    The promise of the inflationary-universe scenario is to free the present state of the universe from extreme dependence upon initial data. Paradoxically, inflation is usually analyzed in the context of the homogeneous and isotropic Robertson-Walker cosmological models. It is shown that all but a small subset of the homogeneous models undergo inflation. Any initial anisotropy is so strongly damped that if sufficient inflation occurs to solve the flatness and horizon problems, the universe today would still be very isotropic.

  13. Management Information Systems Design Implications: The Effect of Cognitive Style and Information Presentation on Problem Solving.

    DTIC Science & Technology

    1987-12-01

    my thesis advisor, Dr Dennis E Campbell. Without his expert advice and extreme patience with an INTP like myself, this research would not have been...research was to identify a relationship between psychological type and mode of presentation of information. The type theory developed by Carl Jung and...preference rankings for seven different modes of presentation of data. The statistical analyses showed no relationship between personality type and

  14. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)

    1992-01-01

    High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.

  15. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)

    1995-01-01

    High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.

  16. Functional reasoning in diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Sticklen, Jon; Bond, W. E.; Stclair, D. C.

    1988-01-01

    This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical Expert Systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.

  17. Efforts to Handle Waste through Science, Environment, Technology and Society (SETS)

    NASA Astrophysics Data System (ADS)

    Rahmawati, D.; Rahman, T.; Amprasto, A.

    2017-09-01

    This research aimed to identify efforts to handle waste through SETS-based learning that facilitates problem solving and environmental awareness among high school students. The research method was a weak experimental design, specifically the one-group pretest-posttest design. The population comprised senior high school classes in Ciamis Regency, Indonesia: 10 classes totaling 360 students. The sample was one class. Data were collected through a pretest and posttest measuring gains in students' problem-solving skills and environmental awareness. The instruments were a 15-item essay test of problem-solving ability on the concepts of pollution and environmental protection, and a 28-statement attitude-scale questionnaire. The average N-gain analysis showed that students' SETS problem-solving skills and environmental awareness fell in the medium category. In addition, students showed good creativity in waste management by creating products of appropriate aesthetic and economic value.

  18. Teaching the tacit knowledge of programming to novices with natural language tutoring

    NASA Astrophysics Data System (ADS)

    Lane, H. Chad; Vanlehn, Kurt

    2005-09-01

    For beginning programmers, inadequate problem solving and planning skills are among the most salient of their weaknesses. In this paper, we test the efficacy of natural language tutoring to teach and scaffold acquisition of these skills. We describe ProPL (Pro-PELL), a dialogue-based intelligent tutoring system that elicits goal decompositions and program plans from students in natural language. The system uses a variety of tutoring tactics that leverage students' intuitive understandings of the problem, how it might be solved, and the underlying concepts of programming. We report the results of a small-scale evaluation comparing students who used ProPL with a control group who read the same content. Our primary findings are that students who received tutoring from ProPL seem to have developed an improved ability to solve the composition problem and displayed behaviors that suggest they were able to think at greater levels of abstraction than students in the read-only group.

  19. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to the groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
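
    The POD reduction step is typically computed from the SVD of a matrix of solution snapshots; a minimal sketch (our construction, with an assumed energy-fraction truncation criterion):

      import numpy as np

      def pod_basis(snapshots, energy=0.999):
          # Columns of `snapshots` are model states; keep the leading left
          # singular vectors capturing the requested energy fraction.
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          cum = np.cumsum(s ** 2) / np.sum(s ** 2)
          r = int(np.searchsorted(cum, energy)) + 1
          return U[:, :r]          # reduced model: x ≈ U_r @ (U_r.T @ x)

      rng = np.random.default_rng(0)
      snaps = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 40))
      print(pod_basis(snaps).shape)   # rank-3 data -> 3 modes retained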

  20. A variational approach to probing extreme events in turbulent dynamical systems

    PubMed Central

    Farazmand, Mohammad; Sapsis, Themistoklis P.

    2017-01-01

    Extreme events are ubiquitous in a wide range of dynamical systems, including turbulent fluid flows, nonlinear waves, large-scale networks, and biological systems. We propose a variational framework for probing conditions that trigger intermittent extreme events in high-dimensional nonlinear dynamical systems. We seek the triggers as the probabilistically feasible solutions of an appropriately constrained optimization problem, where the function to be maximized is a system observable exhibiting intermittent extreme bursts. The constraints are imposed to ensure the physical admissibility of the optimal solutions, that is, significant probability for their occurrence under the natural flow of the dynamical system. We apply the method to a body-forced incompressible Navier-Stokes equation, known as the Kolmogorov flow. We find that the intermittent bursts of the energy dissipation are independent of the external forcing and are instead caused by the spontaneous transfer of energy from large scales to the mean flow via nonlinear triad interactions. The global maximizer of the corresponding variational problem identifies the responsible triad, hence providing a precursor for the occurrence of extreme dissipation events. Specifically, monitoring the energy transfers within this triad allows us to develop a data-driven short-term predictor for the intermittent bursts of energy dissipation. We assess the performance of this predictor through direct numerical simulations. PMID:28948226

  1. Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization

    DTIC Science & Technology

    2017-06-01

    solving the monolith, we develop a method for producing lower bounds to the optimal objective function value. To do this, we solve a new integer...as developing and analyzing methods for producing lower bounds to the optimal objective function value of the seminal problem monolith, which this...length of the window decreases, the end effects of the model typically increase (Zerr, 2016). There are four primary methods for correcting end

  2. Near optimal pentamodes as a tool for guiding stress while minimizing compliance in 3d-printed materials: A complete solution to the weak G-closure problem for 3d-printed materials

    NASA Astrophysics Data System (ADS)

    Milton, Graeme W.; Camar-Eddine, Mohamed

    2018-05-01

    For a composite containing one isotropic elastic material, with positive Lamé moduli, and void, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average stress σ0, Gibiansky, Cherkaev, and Allaire provided a sharp lower bound W_f(σ0) on the minimum compliance energy σ0 : ε0, in which ε0 is the average strain. Here we show these bounds also provide sharp bounds on the possible (σ0, ε0)-pairs that can coexist in such composites, and thus solve the weak G-closure problem for 3d-printed materials. The materials we use to achieve the extremal (σ0, ε0)-pairs are denoted as near optimal pentamodes. We also consider two-phase composites containing this isotropic elastic material and a rigid phase, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average strain ε0. For such composites, Allaire and Kohn provided a sharp lower bound W̃_f(ε0) on the minimum elastic energy σ0 : ε0. We show that these bounds also provide sharp bounds on the possible (σ0, ε0)-pairs that can coexist in such composites of the elastic and rigid phases, and thus solve the weak G-closure problem in this case too. The materials we use to achieve these extremal (σ0, ε0)-pairs are denoted as near optimal unimodes.

  3. Towards Highly Scalable Ab Initio Molecular Dynamics (AIMD) Simulations on the Intel Knights Landing Manycore Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.

    2017-07-03

    The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute-intensive application such as AIMD forms a good candidate to leverage this processing power. In this paper, we focus on adding thread-level parallelism to the plane-wave DFT methodology implemented in NWChem. Through a careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results on the complete AIMD simulation for a 64 water molecule test case, which scales up to all 68 cores of the Knights Landing processor.

  4. Effects of problem-solving interventions on aggressive behaviours among primary school pupils in Ibadan, Nigeria.

    PubMed

    Abdulmalik, Jibril; Ani, Cornelius; Ajuwon, Ademola J; Omigbodun, Olayinka

    2016-01-01

    Aggressive patterns of behavior often start early in childhood and tend to remain stable into adulthood. The negative consequences include poor academic performance, disciplinary problems and encounters with the juvenile justice system. Early school intervention programs can alter this trajectory for aggressive children. However, there are no studies evaluating the feasibility of such interventions in Africa. This study therefore assessed the effect of group-based problem-solving interventions on aggressive behaviors among primary school pupils in Ibadan, Nigeria. This was an intervention study with treatment and wait-list control groups. Two public primary schools in Ibadan, Nigeria were randomly allocated to an intervention group and a waiting-list control group. Teachers rated male Primary 5 pupils in the two schools on aggressive behaviors, and the top 20 highest scorers in each school were selected. Pupils in the intervention school received six twice-weekly sessions of group-based intervention, which included problem-solving skills, calming techniques and attribution retraining. Outcome measures were: teacher-rated aggressive behaviour (TRAB), the self-rated aggression scale (SRAS), the strengths and difficulties questionnaire (SDQ), the attitude towards aggression questionnaire (ATAQ), and the social cognition and attribution scale (SCAS). The participants were aged 12 years on average (SD = 1.2, range 9-14 years). Both groups had similar socio-demographic backgrounds and baseline measures of aggressive behaviors. Controlling for baseline scores, the intervention group had significantly lower scores on the TRAB and SRAS 1 week post-intervention, with large Cohen's effect sizes of 1.2 and 0.9, respectively. The other outcome measures were not significantly different between the groups post-intervention. Group-based problem-solving intervention for aggressive behaviors among primary school students showed significant reductions in both teacher- and student-rated aggressive behaviours, with large effect sizes. However, this was a small exploratory trial whose findings may not be generalizable; nonetheless, it demonstrates that psychological interventions for children with high levels of aggressive behaviour are feasible and potentially effective in Nigeria.

  5. GEE-WIS Anchored Problem Solving Using Real-Time Authentic Water Quality Data

    NASA Astrophysics Data System (ADS)

    Young, M.; Wlodarczyk, M. S.; Branco, B.; Torgersen, T.

    2002-05-01

    GEE-WIS scientific problem solving consists of observing, hypothesizing, synthesizing, argument building and reasoning, in the context of analysis, representation, modeling and sense-making of real-time authentic water quality data. Geoscience Environmental Education - Web-accessible Instrumented Systems, or GEE-WIS, an NSF Geoscience Education grant, has established a set of companion websites that stream real-time data from two campus retention ponds for research and for use in secondary and undergraduate water quality lessons. We have targeted scientific problem-solving skills because of the nature of the GEE-WIS environment, but further because they are central to state and federal efforts to establish science education curriculum standards and are at the core of performance-based testing. We have used a design experiment process to create and test two Anchored Instruction scenario problems. Customization such as that done through a design process is acknowledged to be a fundamental component of educational research from an ecological psychology perspective. Our efforts have shared core design elements with other NSF water quality projects. Our method involves the analysis of students' written scenario responses for level of scientific problem solving, using a qualitative scoring rubric designed from participation in a related NSF project, SCALE (Synergy Communities: Aggregating Learning about Education). Student solutions of GEE-WIS anchor problems from Fall 2001 and Spring 2002 will be summarized. Implications are drawn for those interested in making secondary and higher education geoscience more realistic and more motivating for students through the use of real-time authentic data via the Internet.

  6. Toward Solving the Problem of Problem Solving: An Analysis Framework

    ERIC Educational Resources Information Center

    Roesler, Rebecca A.

    2016-01-01

    Teaching is replete with problem solving. Problem solving as a skill, however, is seldom addressed directly within music teacher education curricula, and research in music education has not examined problem solving systematically. A framework detailing problem-solving component skills would provide a needed foundation. I observed problem solving…

  7. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings, which include computation of the Jacobian matrix, which may be difficult or even impossible to compute, and solving the Newton system at every iteration. Also, a common setback with some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse H_k is updated via the PSB formula in a way that improves efficiency and requires low memory storage, which is the main aim of this paper. The preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
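
    For reference, the textbook PSB update of a Jacobian approximation B_k can be sketched as follows (our sketch of the classical update; the paper's improved variant, which updates the approximate inverse H_k, is not reproduced here):

      import numpy as np

      def psb_solve(F, x0, tol=1e-10, max_iter=100):
          # Quasi-Newton iteration for F(x) = 0 with the symmetric
          # rank-two PSB update of the Jacobian approximation B.
          x = np.asarray(x0, dtype=float)
          B = np.eye(len(x))
          Fx = F(x)
          for _ in range(max_iter):
              if np.linalg.norm(Fx) < tol:
                  break
              s = np.linalg.solve(B, -Fx)
              x = x + s
              Fx_new = F(x)
              r = (Fx_new - Fx) - B @ s       # residual of the secant equation
              ss = s @ s
              B = B + (np.outer(r, s) + np.outer(s, r)) / ss \
                    - (s @ r) * np.outer(s, s) / ss**2
              Fx = Fx_new
          return x

      F = lambda x: np.array([2*x[0] + x[1] + 0.1*x[0]**2 - 3.0,
                              x[0] + 2*x[1] - 3.0])
      print(psb_solve(F, [1.0, 1.0]))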

  8. Optimal bounds and extremal trajectories for time averages in dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.

  9. Detection of lack of fusion using opaque additives

    NASA Technical Reports Server (NTRS)

    Cook, J. L.

    1973-01-01

    Reliable nondestructive inspection for incomplete weldment penetration and the rapid oxidation of aluminum surfaces exposed to the atmosphere are currently two major problems in welded aluminum spacecraft structures. Incomplete-penetration defects are extremely difficult to detect and can lead to catastrophic failure of the structure. The moisture absorbed by aluminum oxide on the surface can cause weldment porosity if the surface is not cleaned before welding. The approach employed in this program to solve both problems was to use copper as a coating to prevent oxidation of the aluminum. Copper was also used as an opaque additive in the weldment to enhance X-ray detection of incomplete penetration.

  10. Dysfunctional attitudes and poor problem solving skills predict hopelessness in major depression.

    PubMed

    Cannon, B; Mulroy, R; Otto, M W; Rosenbaum, J F; Fava, M; Nierenberg, A A

    1999-09-01

    Hopelessness is a significant predictor of suicidality, but not all depressed patients feel hopeless. If clinicians can predict hopelessness, they may be able to identify those patients at risk of suicide and focus interventions on factors associated with hopelessness. In this study, we examined potential demographic, diagnostic, and symptom predictors of hopelessness in a sample of 138 medication-free outpatients (73 women and 65 men) with a primary diagnosis of major depression. The significance of predictors was evaluated in both simple and multiple regression analyses. Consistent with previous studies, we found no significant associations between demographic or diagnostic variables and greater hopelessness. Hopelessness was significantly associated with greater depression severity, poor problem-solving abilities as assessed by the Problem Solving Inventory, and each of two measures of dysfunctional cognitions (the Dysfunctional Attitudes Scale and the Cognitions Questionnaire). In a stepwise multiple regression equation, however, only dysfunctional cognitions and poor problem solving offered non-redundant prediction of hopelessness scores, together accounting for 20% of the variance in these scores. This study is based on depressed patients entering an outpatient treatment protocol; all analyses were correlational in nature, and no causal links can be concluded. Our findings, identifying clinical correlates of hopelessness, provide clinicians with potential additional targets for the assessment and treatment of suicidal risk. In particular, clinical attention to dysfunctional attitudes and problem-solving skills may be important for further reduction of hopelessness and perhaps suicidal risk.

  11. Estimation of Surface Temperature and Heat Flux by Inverse Heat Transfer Methods Using Internal Temperatures Measured While Radiantly Heating a Carbon/Carbon Specimen up to 1920 F

    NASA Technical Reports Server (NTRS)

    Pizzo, Michelle; Daryabeigi, Kamran; Glass, David

    2015-01-01

    The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures, e.g., vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve both the direct heat conduction problem, i.e., solving between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. The completed research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems, using one-dimensional, centered, implicit finite volume schemes and one-dimensional, centered, explicit space-marching techniques. The developed code assumed the boundary conditions to be specified time-varying temperatures and also considered temperature-dependent thermal properties. The completed research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 F. The temperature was measured using thermocouple (TC) plugs (small carbon/carbon material specimens), with four embedded TC plugs inserted into the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high-temperature vehicles.
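
    As a rough illustration of the inverse (space-marching) step described above, the sketch below advances the estimated temperature history one grid cell toward the heated surface from two interior thermocouple histories. It assumes a uniform grid and constant thermal diffusivity, whereas the actual code used finite volumes and temperature-dependent properties; the function name and inputs are hypothetical.

    ```python
    import numpy as np

    def march_one_cell(T_deep, T_mid, dt, dx, alpha):
        """One step of inverse space marching toward the heated surface.

        T_deep and T_mid are measured (or previously estimated) temperature
        histories at depths x + dx and x; the 1D heat equation, discretized
        centered in space and time, is solved for the unknown history at
        x - dx. alpha = k/(rho*c) is the thermal diffusivity, taken constant
        here although the study used temperature-dependent properties.
        """
        dTdt = np.gradient(T_mid, dt)                 # centered time derivative
        return 2.0 * T_mid - T_deep + (dx**2 / alpha) * dTdt

    # Repeating this cell by cell yields the surface history T_s, after which
    # a one-sided difference estimates the surface heat flux:
    #   q_s ~ -k * (T_next - T_s) / dx
    # In practice the measured histories must be low-pass filtered first,
    # since this inverse step amplifies measurement noise.
    ```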

  12. An evaluation of collision models in the Method of Moments for rarefied gas problems

    NASA Astrophysics Data System (ADS)

    Emerson, David; Gu, Xiao-Jun

    2014-11-01

    The Method of Moments offers an attractive approach for solving gaseous transport problems that are beyond the limit of validity of the Navier-Stokes-Fourier equations. Recent work has demonstrated the capability of the regularized 13 and 26 moment equations for solving problems when the Knudsen number, Kn (the ratio of the mean free path of a gas to a typical length scale of interest), is in the range 0.1-1.0, the so-called transition regime. In comparison to numerical solutions of the Boltzmann equation, the Method of Moments has captured, both qualitatively and quantitatively, the results of classical test problems in kinetic theory, e.g., velocity slip in Kramers' problem, temperature jump in Knudsen layers, the Knudsen minimum, etc. However, most of these results have been obtained for Maxwell molecules, where molecules repel each other according to an inverse fifth-power rule. Recent work has incorporated more traditional collision models such as BGK, the S-model, and ES-BGK, the latter being important for thermal problems where the Prandtl number can vary. We are currently investigating the impact of these collision models on fundamental low-speed problems of particular interest to micro-scale flows, which will be discussed and evaluated in the presentation. Supported by the Engineering and Physical Sciences Research Council under Grant EP/I011927/1 and CCP12.

  13. Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

    NASA Astrophysics Data System (ADS)

    Traversa, Fabio L.; Di Ventra, Massimiliano

    2017-02-01

    We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems, including Non-deterministic Polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) and/or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to be solved; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question, related to the constraints of poly-resource resolvability.

  14. Goals and everyday problem solving: examining the link between age-related goals and problem-solving strategy use.

    PubMed

    Hoppmann, Christiane A; Coats, Abby Heckman; Blanchard-Fields, Fredda

    2008-07-01

    Qualitative interviews on family and financial problems from 332 adolescents, young, middle-aged, and older adults, demonstrated that developmentally relevant goals predicted problem-solving strategy use over and above problem domain. Four focal goals concerned autonomy, generativity, maintaining good relationships with others, and changing another person. We examined both self- and other-focused problem-solving strategies. Autonomy goals were associated with self-focused instrumental problem solving and generative goals were related to other-focused instrumental problem solving in family and financial problems. Goals of changing another person were related to other-focused instrumental problem solving in the family domain only. The match between goals and strategies, an indicator of problem-solving adaptiveness, showed that young individuals displayed the greatest match between autonomy goals and self-focused problem solving, whereas older adults showed a greater match between generative goals and other-focused problem solving. Findings speak to the importance of considering goals in investigations of age-related differences in everyday problem solving.

  15. Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    1992-08-01

    A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.

  16. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large overseas areas or with large volumes of images. In this paper, considering the geometric features of optical satellite imagery, we propose a GCP-independent block adjustment method for super large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), based on the widely used Alternating Direction Method of Multipliers (ADMM) for constrained optimization and on RFM least-squares block adjustment; the method is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy, and performance of the developed procedure are presented and studied.

  17. Resources in Technology: Problem-Solving.

    ERIC Educational Resources Information Center

    Technology Teacher, 1986

    1986-01-01

    This instructional module examines a key function of science and technology: problem solving. It studies the meaning of problem solving, looks at techniques for problem solving, examines case studies that exemplify the problem-solving approach, presents problems for the reader to solve, and provides a student self-quiz. (Author/CT)

  18. Apollo 13 creativity: in-the-box innovation.

    PubMed

    King, M J

    1997-01-01

    A study of the Apollo 13 mission, based on the themes showcased in the acclaimed 1995 film, reveals the grace under pressure that is the condition of optimal creativity. "Apollo 13 Creativity" is a cultural and creative problem-solving appreciation of the thinking style that made the Apollo mission succeed: creativity under severe limitations. Although creativity is often considered a "luxury good," of concern mainly for personal enrichment, the arts, and performance improvement, in life-or-death situations it is the critical pathway not only to success but to survival. In this case, the original plan for a moon landing had to be transformed within a matter of hours into a return to Earth. By precluding failure as an option at the outset, both space and ground crews were forced to adopt a new perspective on their resources and options to solve for a successful landing. This now-classic problem provides a range of principles for creative practice and motivation applicable in any situation. The extreme situation makes these points dramatically.

  19. Effects of van der Waals Force and Thermal Stresses on Pull-in Instability of Clamped Rectangular Microplates

    PubMed Central

    Batra, Romesh C.; Porfiri, Maurizio; Spinello, Davide

    2008-01-01

    We study the influence of von Kármán nonlinearity, van der Waals force, and thermal stresses on pull-in instability and small vibrations of electrostatically actuated microplates. We use the Galerkin method to develop a tractable reduced-order model for electrostatically actuated clamped rectangular microplates in the presence of van der Waals forces and thermal stresses. More specifically, we reduce the governing two-dimensional nonlinear transient boundary-value problem to a single nonlinear ordinary differential equation. For the static problem, the pull-in voltage and the pull-in displacement are determined by solving a pair of nonlinear algebraic equations. The fundamental vibration frequency corresponding to a deflected configuration of the microplate is determined by solving a linear algebraic equation. The proposed reduced-order model allows for accurately estimating the combined effects of van der Waals force and thermal stresses on the pull-in voltage and the pull-in deflection profile with an extremely limited computational effort. PMID:27879752
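
    For intuition about how pull-in falls out of a reduced-order model, the sketch below uses the textbook single-degree-of-freedom parallel-plate actuator (not the paper's Galerkin plate model, and without the van der Waals or thermal terms): a linear spring balanced against the electrostatic force, whose static equilibrium disappears at the pull-in voltage. All parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Textbook 1-DOF parallel-plate actuator (not the paper's plate model):
    # spring force k*x balances electrostatic force eps*A*V^2 / (2*(g-x)^2).
    eps = 8.854e-12   # F/m, vacuum permittivity
    A   = 1e-6        # m^2, electrode area (illustrative value)
    g   = 2e-6        # m,   initial gap (illustrative value)
    k   = 10.0        # N/m, effective stiffness (illustrative value)

    def equilibrium_gap(V):
        """Stable static deflection x for voltage V, or None past pull-in."""
        f = lambda x: k * x - eps * A * V**2 / (2.0 * (g - x)**2)
        try:
            # the stable equilibrium branch lies below x = g/3
            return brentq(f, 0.0, g / 3.0)
        except ValueError:
            return None  # no equilibrium below g/3: pull-in has occurred

    V_pi = np.sqrt(8.0 * k * g**3 / (27.0 * eps * A))  # closed-form pull-in voltage
    print(f"pull-in voltage ~ {V_pi:.2f} V")
    print(equilibrium_gap(0.5 * V_pi), equilibrium_gap(1.1 * V_pi))
    ```

    The paper's model plays the same game with a pair of nonlinear algebraic equations obtained from the Galerkin reduction, so additional forces simply shift where the equilibrium ceases to exist.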

  20. Effects of van der Waals Force and Thermal Stresses on Pull-in Instability of Clamped Rectangular Microplates.

    PubMed

    Batra, Romesh C; Porfiri, Maurizio; Spinello, Davide

    2008-02-15

    We study the influence of von Kármán nonlinearity, van der Waals force, and thermal stresses on pull-in instability and small vibrations of electrostatically actuated microplates. We use the Galerkin method to develop a tractable reduced-order model for electrostatically actuated clamped rectangular microplates in the presence of van der Waals forces and thermal stresses. More specifically, we reduce the governing two-dimensional nonlinear transient boundary-value problem to a single nonlinear ordinary differential equation. For the static problem, the pull-in voltage and the pull-in displacement are determined by solving a pair of nonlinear algebraic equations. The fundamental vibration frequency corresponding to a deflected configuration of the microplate is determined by solving a linear algebraic equation. The proposed reduced-order model allows for accurately estimating the combined effects of van der Waals force and thermal stresses on the pull-in voltage and the pull-in deflection profile with an extremely limited computational effort.

  1. The importance of situation-specific encodings: analysis of a simple connectionist model of letter transposition effects

    NASA Astrophysics Data System (ADS)

    Fang, Shin-Yi; Smith, Garrett; Tabor, Whitney

    2018-04-01

    This paper analyses a three-layer connectionist network that solves a translation-invariance problem, offering a novel explanation for transposed letter effects in word reading. Analysis of the hidden unit encodings provides insight into two central issues in cognitive science: (1) What is the novelty of claims of "modality-specific" encodings? and (2) How can a learning system establish a complex internal structure needed to solve a problem? Although these topics (embodied cognition and learnability) are often treated separately, we find a close relationship between them: modality-specific features help the network discover an abstract encoding by causing it to break the initial symmetries of the hidden units in an effective way. While this neural model is extremely simple compared to the human brain, our results suggest that neural networks need not be black boxes and that carefully examining their encoding behaviours may reveal how they differ from classical ideas about the mind-world relationship.

  2. The role of extreme orbits in the global organization of periodic regions in parameter space for one dimensional maps

    NASA Astrophysics Data System (ADS)

    da Costa, Diogo Ricardo; Hansen, Matheus; Guarise, Gustavo; Medrano-T, Rene O.; Leonel, Edson D.

    2016-04-01

    We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades with complicated sets of periodicity. The extreme orbits prove to be a powerful indicator for investigating the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain some results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems.
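
    A small numerical illustration of the idea, under the assumption that for the logistic map the relevant orbit is the orbit of the critical point x = 1/2 (the map's single extremum): parameters where that orbit closes on itself are superstable and sit at the centers of periodic windows, so they can be located by a simple root scan. The parameter grid and the period below are arbitrary choices.

    ```python
    import numpy as np

    def superstable_params(period, r_grid):
        """Parameters r where the critical point x = 1/2 of the logistic map
        f(x) = r*x*(1-x) maps back to itself after `period` iterates.

        Orbits through the critical point pass through the map's extreme
        value and sit at the centers of periodic windows, so sign changes
        of g(r) = f^period(1/2) - 1/2 locate the windows.
        """
        def g(r):
            x = 0.5
            for _ in range(period):
                x = r * x * (1.0 - x)
            return x - 0.5

        vals = np.array([g(r) for r in r_grid])
        roots = []
        for i in range(len(r_grid) - 1):
            if vals[i] * vals[i + 1] < 0:        # sign change: refine by bisection
                a, b = r_grid[i], r_grid[i + 1]
                for _ in range(60):
                    m = 0.5 * (a + b)
                    a, b = (m, b) if g(a) * g(m) > 0 else (a, m)
                roots.append(0.5 * (a + b))
        return roots

    # e.g. the period-3 window of the logistic map (superstable r ~ 3.83)
    print(superstable_params(3, np.linspace(2.9, 4.0, 4001)))
    ```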

  3. Influence of Distributed Residential Energy Storage on Voltage in Rural Distribution Network and Capacity Configuration

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Tong, Yibin; Zhao, Zhigang; Zhang, Xuefen

    2018-03-01

    Large-scale access of distributed residential photovoltaics (PV) in rural areas has solved the voltage problem to a certain extent. However, due to the intermittency of PV and the particularity of rural residents' power load, the problem of low voltage at the evening peak remains to be resolved. This paper proposes to solve the problem by connecting residential energy storage. First, the influence of the access location and capacity of energy storage on the voltage distribution in a rural distribution network is analyzed. Second, the relation between storage capacity and load capacity is derived for four typical load and energy storage cases in which the voltage deviation meets the requirement. Finally, the optimal storage position and capacity are obtained by using PSO and power flow simulation.

  4. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  5. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of graphics hardware without compromising the accuracy of the reconstructed images.

  6. Mesoscale modeling: solving complex flows in biology and biotechnology.

    PubMed

    Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander

    2013-07-01

    Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology.

  7. Task-specific modulation of adult humans' tool preferences: number of choices and size of the problem.

    PubMed

    Silva, Kathleen M; Gross, Thomas J; Silva, Francisco J

    2015-03-01

    In two experiments, we examined the effect of modifications to the features of a stick-and-tube problem on the stick lengths that adult humans used to solve the problem. In Experiment 1, we examined whether people's tool preferences for retrieving an out-of-reach object in a tube might more closely resemble those reported with laboratory crows if people could modify a single stick to an ideal length to solve the problem. Contrary to when adult humans have selected a tool from a set of ten sticks, asking people to modify a single stick to retrieve an object did not generally result in a stick whose length was related to the object's distance. Consistent with the prior research, though, the working length of the stick was related to the object's distance. In Experiment 2, we examined the effect of increasing the scale of the stick-and-tube problem on people's tool preferences. Increasing the scale of the task led people to select relatively shorter tools than participants had selected in previous studies. Although the causal structures of the tasks used in the two experiments were identical, their results were not. This underscores the necessity of studying physical cognition in relation to a particular causal structure by using a variety of tasks and methods.

  8. Cascade Optimization Strategy Maximizes Thrust for High-Speed Civil Transport Propulsion System Concept

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to COMETBOARDS, a multidisciplinary optimization tool developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of fans, fuel flow, and other factors as important active design variables. Constraints were specified on factors including the maximum speed of the compressors, positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem using the most reliable optimization algorithm available in COMETBOARDS would provide feasible optimum results for only a portion of the aircraft flight regime because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), diverse constraint types, and overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, which was devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems across their flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when it was initiated from different design points. Multiple optimizers in a specified sequence, pseudorandom damping, and reduction of design space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine concept (a Mach-2.4 mixed-flow turbofan), along with its configuration, illustrates the approach; the optimum thrust, normalized with respect to NEPP results, demonstrates the added value of COMETBOARDS in the design optimization of the HSCT engine.

  9. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated "as it is" but by means of an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to expensive computational problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: liner surfaces with diverse finishes (honed and coated bores) under constant piston velocity and ring load, and also under real engine conditions.

  10. Task-driven dictionary learning.

    PubMed

    Mairal, Julien; Bach, Francis; Ponce, Jean

    2012-04-01

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
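
    For context, the sparse-coding subproblem that classical (unsupervised) dictionary learning alternates with dictionary updates can be written as a lasso and solved with a few lines of ISTA. This is a sketch of that generic inner step, not the task-driven (supervised) algorithm of the paper; the dictionary, signal, and parameter values are illustrative.

    ```python
    import numpy as np

    def ista_sparse_code(D, x, lam=0.1, n_iter=200):
        """Sparse code of signal x on dictionary D by ISTA.

        Solves min_a 0.5*||x - D a||^2 + lam*||a||_1, the inner problem
        of classical dictionary learning (a sketch of the generic step,
        not the task-driven algorithm of the paper).
        """
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            z = a - grad / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return a

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    x = D[:, :5] @ rng.standard_normal(5)      # a signal sparse in D
    print(np.count_nonzero(ista_sparse_code(D, x)))
    ```

    The supervised setting of the paper differs in that the dictionary itself is tuned by a task loss propagated through this inner solution, which is what makes it harder than the matrix factorization above.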

  11. A modified generalized extremal optimization algorithm for the quay crane scheduling problem with interference constraints

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2014-10-01

    The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
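
    For readers unfamiliar with the underlying heuristic, the sketch below implements canonical generalized extremal optimization on a toy binary cost function; the paper's modified GEO adds a randomized neighbourhood search over task-to-QC assignments and a unidirectional decoding scheme that are not reproduced here. The cost function and parameter values are illustrative.

    ```python
    import numpy as np

    def geo_minimize(cost, n_bits, tau=1.5, n_steps=2000, seed=0):
        """Generalized extremal optimization on a binary string (canonical
        form, not the paper's QCSP-specific variant).

        At each step every bit is tentatively flipped and ranked by the
        resulting cost (rank 1 = most improving flip); a rank k is then
        drawn with probability proportional to k**(-tau) and that bit is
        flipped. Small tau approaches a random walk, large tau is greedy.
        """
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, n_bits)
        best, best_cost = x.copy(), cost(x)
        p = np.arange(1, n_bits + 1, dtype=float) ** (-tau)
        p /= p.sum()
        for _ in range(n_steps):
            deltas = np.empty(n_bits)
            for i in range(n_bits):
                x[i] ^= 1                       # tentative flip
                deltas[i] = cost(x)
                x[i] ^= 1                       # undo
            order = np.argsort(deltas)          # bit giving lowest cost first
            k = rng.choice(n_bits, p=p)         # power-law rank selection
            x[order[k]] ^= 1
            c = cost(x)
            if c < best_cost:
                best, best_cost = x.copy(), c
        return best, best_cost

    # Toy usage: minimize the number of ones in the string
    print(geo_minimize(lambda x: int(x.sum()), 20)[1])
    ```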

  12. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than to a training set as in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or polluted by noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions.

  13. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.

  14. Developing a fluid intelligence scale through a combination of Rasch modeling and cognitive psychology.

    PubMed

    Primi, Ricardo

    2014-09-01

    Ability testing has been criticized because understanding of the construct being assessed is incomplete and because testing has not yet been satisfactorily improved in accordance with new knowledge from cognitive psychology. This article contributes to the solution of this problem through the application of item response theory and Susan Embretson's cognitive design system in the development of a fluid intelligence scale. The study is based on findings from cognitive psychology; instead of focusing on the development of a test, it focuses on the definition of a variable for the creation of a criterion-referenced measure of fluid intelligence. A geometric matrix item bank with 26 items was analyzed with data from 2,797 undergraduate students. The main result was a criterion-referenced scale based on information from item features linked to cognitive components, such as storage capacity, goal management, and abstraction; this information was used to create descriptions of selected levels of a fluid intelligence scale. The scale proposes that levels of fluid intelligence range from the ability to solve problems containing a limited number of bits of information with obvious relationships to the ability to solve problems that involve abstract relationships under conditions confounded by information overload and distraction by mixed noise. This scale can be employed in future research to provide interpretations of the measurements of the cognitive processes mastered and the types of difficulty experienced by examinees.
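
    Since the scale rests on item response theory, a minimal sketch of the Rasch model may help: ability and item difficulty live on one logit scale, which is what lets score levels be described in terms of the item features mastered. The difficulties and the response pattern below are made-up illustrative values, not data from the study.

    ```python
    import numpy as np

    def rasch_prob(theta, b):
        """Rasch model: probability of a correct response for a person of
        ability theta on an item of difficulty b (both on the same logit
        scale, which enables criterion-referenced interpretation)."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def estimate_ability(responses, b, n_iter=20):
        """Maximum-likelihood ability for one person, given item
        difficulties b and 0/1 responses (simple Newton iteration;
        assumes a mixed response pattern, since all-correct or all-wrong
        patterns have no finite MLE)."""
        theta = 0.0
        for _ in range(n_iter):
            p = rasch_prob(theta, b)
            grad = np.sum(responses - p)        # d log-likelihood / d theta
            hess = -np.sum(p * (1.0 - p))
            theta -= grad / hess
        return theta

    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])   # illustrative difficulties
    responses = np.array([1, 1, 1, 0, 1, 0])
    print(estimate_ability(responses, b))
    ```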

  15. Large-scale brain network associated with creative insight: combined voxel-based morphometry and resting-state functional connectivity analyses.

    PubMed

    Ogawa, Takeshi; Aihara, Takatsugu; Shimokawa, Takeaki; Yamashita, Okito

    2018-04-24

    Creative insight occurs with an "Aha!" experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21-69 years. Participants completed a magnetic resonance imaging (MRI) study (structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole-brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e., a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions, whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic, and cerebral-cerebellum networks.

  16. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus of first-order derivative information, at the expense of just one model evaluation. The calculation of the Lagrange multipliers therefore does not require the solution of the computationally intensive adjoint problem, which leads to significant speedups for large-scale, gradient-based inverse problems.

  17. A Cognitive Analysis of Students’ Mathematical Problem Solving Ability on Geometry

    NASA Astrophysics Data System (ADS)

    Rusyda, N. A.; Kusnandi, K.; Suhendra, S.

    2017-09-01

    The purpose of this research is to analyze the mathematical problem-solving ability of students at a secondary school in geometry. This research was conducted using a quantitative approach with a descriptive method. The population was all students of the school, and the sample was twenty-five students chosen by a purposive sampling technique. Data on mathematical problem solving were collected through an essay test. The results showed the percentages of achievement of students' mathematical problem-solving indicators were: 1) solving closed mathematical problems with contexts in mathematics, 50%; 2) solving closed mathematical problems with contexts beyond mathematics, 24%; 3) solving open mathematical problems with contexts in mathematics, 35%; and 4) solving open mathematical problems with contexts outside mathematics, 44%. Based on these percentages, it can be concluded that the level of achievement of mathematical problem-solving ability in geometry is still low. This is because students are not used to solving problems that measure mathematical problem-solving ability, have weaknesses in recalling previous knowledge, and lack a problem-solving framework. Thus, students' mathematical problem-solving ability needs to be improved by implementing an appropriate learning strategy.

  18. Substructure of fuzzy dark matter haloes

    NASA Astrophysics Data System (ADS)

    Du, Xiaolong; Behrens, Christoph; Niemeyer, Jens C.

    2017-02-01

    We derive the halo mass function (HMF) for fuzzy dark matter (FDM) by solving the excursion set problem explicitly with a mass-dependent barrier function, which has not been done before. We find that compared to the naive approach of the Sheth-Tormen HMF for FDM, our approach has a higher cutoff mass and the cutoff mass changes less strongly with redshifts. Using merger trees constructed with a modified version of the Lacey & Cole formalism that accounts for suppressed small-scale power and the scale-dependent growth of FDM haloes and the semi-analytic GALACTICUS code, we study the statistics of halo substructure including the effects from dynamical friction and tidal stripping. We find that if the dark matter is a mixture of cold dark matter (CDM) and FDM, there will be a suppression on the halo substructure on small scales which may be able to solve the missing satellites problem faced by the pure CDM model. The suppression becomes stronger with increasing FDM fraction or decreasing FDM mass. Thus, it may be used to constrain the FDM model.

  19. Image aesthetic quality evaluation using convolution neural network embedded learning

    NASA Astrophysics Data System (ADS)

    Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng

    2017-11-01

    This paper proposes an embedded-learning convolutional neural network (ELCNN) based on image content to evaluate image aesthetic quality. Our approach can not only cope with small-scale data but also score image aesthetic quality. First, we compare AlexNet and VGG_S to determine which is more suitable for this image aesthetic quality evaluation task. Second, to further boost the aesthetic quality classification performance, we employ the image content to train the aesthetic quality classification models. However, this makes the training samples smaller, and a single round of fine-tuning cannot make full use of the small-scale data set. Third, to solve this problem, we propose fine-tuning twice in succession, based on the aesthetic quality label and the content label respectively; the classification probability of the trained CNN models is then used to evaluate image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The experimental results show that the classification accuracy rates of our approach are higher than those of existing image aesthetic quality evaluation approaches.

  20. Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.

    PubMed

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-08-25

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.

  1. Acoustic streaming: an arbitrary Lagrangian–Eulerian perspective

    PubMed Central

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-01-01

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid–structure interaction problems in microacoustofluidic devices. After the formulation’s exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches. PMID:29051631

  2. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    NASA Astrophysics Data System (ADS)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsinz

    2016-12-01

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H 0 and the matter density perturbation σ 8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ 8 problem, while the presence of tightly coupled dark radiation ameliorates the H 0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  3. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H 0 and the matter density perturbation σ 8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ 8 problem, while the presence of tightly coupled dark radiation ameliorates the H 0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  4. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    DOE PAGES

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; ...

    2016-12-21

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H 0 and the matter density perturbation σ 8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ 8 problem, while the presence of tightly coupled dark radiation ameliorates the H 0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  5. Path changing methods applied to the 4-D guidance of STOL aircraft.

    DOT National Transportation Integrated Search

    1971-11-01

    Prior to the advent of large-scale commercial STOL service, some challenging navigation and guidance problems must be solved. Proposed terminal area operations may require that these aircraft be capable of accurately flying complex flight paths, and ...

  6. Optimization Under Uncertainty for Wake Steering Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Annoni, Jennifer; King, Ryan N.

    Here, wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  7. Optimization Under Uncertainty for Wake Steering Strategies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Annoni, Jennifer; King, Ryan N

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  8. Optimization Under Uncertainty for Wake Steering Strategies

    NASA Astrophysics Data System (ADS)

    Quick, Julian; Annoni, Jennifer; King, Ryan; Dykes, Katherine; Fleming, Paul; Ning, Andrew

    2017-05-01

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as “wake steering,” in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  9. Optimization Under Uncertainty for Wake Steering Strategies

    DOE PAGES

    Quick, Julian; Annoni, Jennifer; King, Ryan N.; ...

    2017-06-13

    Here, wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
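
    A stylized version of the optimization-under-uncertainty idea from the wake steering studies above: replace the deterministic objective with a Monte Carlo average of power over a Gaussian yaw-angle error. The two-turbine power model below is a made-up toy (a cosine-cubed yaw loss plus a Gaussian wake-overlap deficit), not the engineering wake model used by the authors, and all parameters are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def farm_power(yaw_deg, width=10.0, deflect=1.2):
        """Toy two-turbine power model (illustrative, not FLORIS): yawing
        the upstream turbine costs cos^3 of the yaw angle but deflects the
        wake, modeled as a Gaussian overlap with the downstream rotor."""
        yaw = np.radians(yaw_deg)
        upstream = np.cos(yaw) ** 3
        overlap = np.exp(-0.5 * (deflect * yaw_deg / width) ** 2)
        downstream = 1.0 - 0.5 * overlap          # 50% deficit at full overlap
        return upstream + downstream

    def expected_power(yaw_deg, sigma=5.0, n=4000, seed=1):
        """Monte Carlo average of power over a Gaussian yaw-angle error
        (fixed seed gives a deterministic objective for the optimizer)."""
        rng = np.random.default_rng(seed)
        return farm_power(yaw_deg + rng.normal(0.0, sigma, n)).mean()

    det = minimize_scalar(lambda y: -farm_power(y), bounds=(0, 40), method="bounded")
    ouu = minimize_scalar(lambda y: -expected_power(y), bounds=(0, 40), method="bounded")
    print(f"deterministic yaw: {det.x:.1f} deg, OUU yaw: {ouu.x:.1f} deg")
    ```

    In this toy, as in the papers, the OUU optimum tends to prescribe less aggressive yaw than the deterministic optimum, trading a little nominal power for robustness to yaw error.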

  10. Xyce parallel electronic simulator users guide, version 6.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
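
    The DAE formulation described above can be illustrated with a toy example: every device stamps its contribution into a residual F(x, dx/dt, t) = 0, and the analysis layer (here, backward Euler with Newton iteration) needs no device-specific knowledge. The circuit, constants, and function names below are an illustrative sketch, not Xyce's actual interfaces.

```python
import numpy as np

# Toy DAE for a single-node RC circuit driven by a 1 kHz, 1 mA current source:
#   C*dv/dt + v/R - I(t) = 0
# The "device" supplies only a residual and its derivative; the time stepper
# (backward Euler + Newton) is generic. Constants are illustrative, not Xyce's.
R, C = 1.0e3, 1.0e-6

def residual(v, dvdt, t):
    return C * dvdt + v / R - 1.0e-3 * np.sin(2.0e3 * np.pi * t)

def backward_euler_step(v_old, t_new, dt, max_newton=20, tol=1e-12):
    v = v_old
    for _ in range(max_newton):
        f = residual(v, (v - v_old) / dt, t_new)
        dfdv = C / dt + 1.0 / R        # analytic Jacobian of the residual in v
        dv = -f / dfdv
        v += dv
        if abs(dv) < tol:
            break
    return v

dt, t, v = 1.0e-5, 0.0, 0.0
for _ in range(200):                   # integrate 2 ms of the transient
    t += dt
    v = backward_euler_step(v, t, dt)
print(f"node voltage at t = {t * 1e3:.2f} ms: {v:.5f} V")
```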

  11. Xyce parallel electronic simulator users' guide, Version 6.0.1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  12. Xyce parallel electronic simulator users guide, version 6.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  13. Meshless Local Petrov-Galerkin Method for Solving Contact, Impact and Penetration Problems

    DTIC Science & Technology

    2006-11-30

    Crack Growth 3 point of view, this approach makes full use of the existing FE models to avoid any model regeneration, which is extremely high in ... process, at point C, the pressure reduces to zero, but the volumetric strain does not go to zero due to the collapsed void volume. 2.2 Damage ... release rate to go beyond the critical strain energy release rate. Thus, the micro-cracks begin to grow inside these areas. At 10 microseconds, these

  14. Piping Connector

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A complex of high pressure piping at Stennis Space Center carries rocket propellants and other fluids/gases through the Center's Component Test Facility. Conventional clamped connectors tend to leak when propellant lines are chilled to extremely low temperatures. For Stennis, Reflange, Inc. customized an existing piping connector to include a secondary seal more tolerant of severe thermal gradients. The T-Con connector solved the problem, and the company is now marketing a commercial version that permits testing, monitoring, or collecting any emissions that may escape the primary seal during severe thermal transitions.

  15. Liquid Space Lubricants Examined by Vibrational Micro-Spectroscopy

    NASA Technical Reports Server (NTRS)

    Street, Kenneth W., Jr.

    2008-01-01

    Considerable effort has been expended to develop liquid lubricants for satellites and space exploration vehicles. These lubricants must often perform under a range of harsh conditions such as vacuum, radiation, and temperature extremes while in orbit or in transit, and in extremely dusty environments at destinations such as the Moon and Mars. Historically, oil development was guided by terrestrial applications, which did not provide adequate space lubricants. Novel fluids such as the perfluorinated polyethers provided some relief but are far from ideal; with each new fluid proposed to solve one problem, other problems have arisen. Much of the work performed at the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) in elucidating the mechanisms by which chemical degradation of space oils occurs has been done by vibrational micro-spectroscopic techniques such as infrared and Raman, which this review details. Presented are fundamental lubrication studies as well as actual case studies in which vibrational spectroscopy has led to millions of dollars in savings and potentially prevented loss of mission.

  16. Gravitational wave astronomy: needle in a haystack.

    PubMed

    Cornish, Neil J

    2013-02-13

    A worldwide array of highly sensitive ground-based interferometers stands poised to usher in a new era in astronomy with the first direct detection of gravitational waves. The data from these instruments will provide a unique perspective on extreme astrophysical objects, such as neutron stars and black holes, and will allow us to test Einstein's theory of gravity in the strong field, dynamical regime. To fully realize these goals, we need to solve some challenging problems in signal processing and inference, such as finding rare and weak signals that are buried in non-stationary and non-Gaussian instrument noise, dealing with high-dimensional model spaces, and locating what are often extremely tight concentrations of posterior mass within the prior volume. Gravitational wave detection using space-based detectors and pulsar timing arrays bring with them the additional challenge of having to isolate individual signals that overlap one another in both time and frequency. Promising solutions to these problems will be discussed, along with some of the challenges that remain.
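
    The "weak signals buried in noise" problem mentioned here is classically attacked with matched filtering, sketched below for a toy time-domain example. Real gravitational-wave pipelines whiten the data, filter in the frequency domain, and must cope with non-stationary, non-Gaussian noise; none of that is reproduced in this minimal sketch, and the signal shape and amplitudes are invented.

```python
import numpy as np

# Matched-filter sketch: correlate data against a known template and normalize,
# so the output behaves like a signal-to-noise ratio (SNR) in white noise.
rng = np.random.default_rng(0)
fs = 1024                                  # samples per second
t = np.arange(0, 8 * fs) / fs              # 8 s of data

# A 1 s windowed sinusoid stands in for a waveform template.
template = np.sin(2 * np.pi * 60 * t[:fs]) * np.hanning(fs)

data = rng.normal(0.0, 1.0, t.size)        # unit-variance Gaussian noise
inject_at = 3 * fs
data[inject_at:inject_at + fs] += 0.5 * template   # weak buried signal

snr = np.correlate(data, template, mode="valid") / np.sqrt(template @ template)
peak = int(np.argmax(np.abs(snr)))
print(f"injected at t = {inject_at / fs:.2f} s, "
      f"peak at t = {peak / fs:.2f} s, |SNR| = {abs(snr[peak]):.1f}")
```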

  17. Learning biology through connecting mathematics to scientific mechanisms: Student outcomes and teacher supports

    NASA Astrophysics Data System (ADS)

    Schuchardt, Anita

    Integrating mathematics into science classrooms has long been part of the conversation in science education. However, studies of student learning after incorporating mathematics into the science classroom have shown mixed results. Understanding these mixed effects has been hindered by a historical focus on characteristics of integration tangential to student learning (e.g., shared elements, extent of integration). A new framework is presented emphasizing the epistemic role of mathematics in science, and an epistemic role missing from the current literature is identified: the use of mathematics to represent scientific mechanisms, or Mechanism Connected Mathematics (MCM). Building on prior theoretical work, it is proposed that having students develop mathematical equations that represent scientific mechanisms could elevate their conceptual understanding and quantitative problem solving. Following the design and implementation of an MCM unit on inheritance, a large-scale quantitative analysis of pre- and post-implementation test results showed that MCM students, compared to traditionally instructed students, had significantly greater gains in conceptual understanding of mathematically modeled scientific mechanisms and in their ability to solve complex quantitative problems. To gain insight into the mechanism behind the gain in quantitative problem solving, a small-scale qualitative study was conducted of two contrasting groups: 1) within MCM instruction, competent versus struggling problem solvers; and 2) within competent problem solvers, MCM instructed versus traditionally instructed. Competent MCM students tended to connect their mathematical inscriptions to the scientific phenomenon and to switch between mathematical and scientifically productive approaches during problem solving in potentially productive ways; the other two groups did not. To address concerns that teacher capacity presents a barrier to the scalability of MCM approaches, the types and amount of teacher support needed to achieve these student learning gains were investigated. When teachers were given access to educative materials, students achieved learning gains in both areas in the absence of face-to-face teacher professional development; however, maximal student learning gains required the investment of face-to-face professional development. This finding can guide the distribution of scarce resources, but it does not preclude implementation of MCM instruction where resource availability does not allow for face-to-face professional development.

  18. Information filtering via a scaling-based function.

    PubMed

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be extracted from the scaling function, and it is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens, and RYM, show that SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other respects: solving the accuracy-diversity dilemma, presenting high novelty, and addressing the key challenge of the cold-start problem.
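
    The hybrid heat-conduction/mass-diffusion propagation that SCL builds on can be written in a few lines of linear algebra, as sketched below; the parameter lambda_ interpolates between heat conduction (lambda_ = 0) and mass diffusion (lambda_ = 1). What this record's SCL adds, choosing lambda_ per object from a scaling function of object degree, is not reproduced here, and the toy ratings matrix is invented.

```python
import numpy as np

# Hybrid heat-conduction / mass-diffusion recommender on a user-object
# bipartite network (the "original hybrid method" this record refers to).
# A[u, o] = 1 if user u collected object o.
def hybrid_scores(A, lambda_):
    k_user = A.sum(axis=1)                    # user degrees
    k_obj = A.sum(axis=0)                     # object degrees
    # M[a, b] = sum_u A[u, a] * A[u, b] / k_user[u]: resource flow b -> u -> a
    M = (A / k_user[:, None]).T @ A
    W = M / (k_obj[:, None] ** (1.0 - lambda_) * k_obj[None, :] ** lambda_)
    return A @ W.T                            # score of each object for each user

A = np.array([[1, 1, 0, 0],                   # toy 3-user, 4-object history
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(np.round(hybrid_scores(A, lambda_=0.5), 3))
```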

  19. First Constraints on Fuzzy Dark Matter from Lyman-α Forest Data and Hydrodynamical Simulations.

    PubMed

    Iršič, Vid; Viel, Matteo; Haehnelt, Martin G; Bolton, James S; Becker, George D

    2017-07-21

    We present constraints on the masses of extremely light bosons dubbed fuzzy dark matter (FDM) from Lyman-α forest data. Extremely light bosons with a de Broglie wavelength of ∼1 kpc have been suggested as dark matter candidates that may resolve some of the current small-scale problems of the cold dark matter model. For the first time, we use hydrodynamical simulations to model the Lyman-α flux power spectrum in these models and compare it to the observed flux power spectrum from two different data sets: the XQ-100 and HIRES/MIKE quasar spectra samples. After marginalization over nuisance and physical parameters, and with conservative assumptions for the thermal history of the intergalactic medium (IGM) that allow for jumps in the temperature of up to 5000 K, XQ-100 provides a lower limit of 7.1×10^{-22} eV, HIRES/MIKE returns a stronger limit of 14.3×10^{-22} eV, and the combination of both data sets results in a limit of 20×10^{-22} eV (2σ C.L.). The limit from the analysis of the combined data sets increases to 37.5×10^{-22} eV (2σ C.L.) when a smoother thermal history is assumed, in which the temperature of the IGM evolves as a power law in redshift. Light boson masses in the range 1-10×10^{-22} eV are ruled out at high significance by our analysis, casting strong doubt that FDM helps solve the "small-scale crisis" of the cold dark matter models.
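
    As a quick sanity check on the ∼1 kpc de Broglie wavelength quoted in this record, the standard back-of-envelope relation for a boson of mass m moving at a typical halo velocity v (values illustrative, not taken from the paper) is:

```latex
\[
  \lambda_{\mathrm{dB}} \;=\; \frac{2\pi\hbar}{m v}
  \;\approx\; 1.2\,\mathrm{kpc}\,
  \left(\frac{10^{-22}\,\mathrm{eV}}{m c^{2}}\right)
  \left(\frac{100\,\mathrm{km\,s^{-1}}}{v}\right)
\]
```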

  20. Increasing mathematical problem-solving performance through relaxation training

    NASA Astrophysics Data System (ADS)

    Sharp, Conni; Coltharp, Hazel; Hurford, David; Cole, Amykay

    2000-04-01

    Two intact classes of 30 undergraduate students enrolled in the same general education mathematics course were each administered the IPSP Mathematics Problem Solving Test and the Mathematics Anxiety Rating Scale at the beginning and end of the semester. Both groups experienced the same syllabus, lectures, course requirements, and assessment techniques; however, one group received relaxation training during an initial class meeting and during the first 5 to 7 minutes of each subsequent class. The group which had received relaxation training had significantly lower mathematics anxiety and significantly higher mathematics performance at the end of the course. The results suggest that relaxation training may be a useful tool for treating anxiety in undergraduate general education mathematics students.
