Sample records for limits solve systems

  1. EXPECT: Explicit Representations for Flexible Acquisition

    NASA Technical Reports Server (NTRS)

    Swartout, Bill; Gil, Yolanda

    1995-01-01

    To create more powerful knowledge acquisition systems, we not only need better acquisition tools, but we need to change the architecture of the knowledge-based systems we create so that their structure will provide better support for acquisition. Current acquisition tools permit users to modify factual knowledge but they provide limited support for modifying problem-solving knowledge. In this paper, the authors argue that this limitation (and others) stems from the use of incomplete models of problem-solving knowledge and inflexible specification of the interdependencies between problem-solving and factual knowledge. We describe the EXPECT architecture, which addresses these problems by providing an explicit representation for problem-solving knowledge and intent. Using this more explicit representation, EXPECT can automatically derive the interdependencies between problem-solving and factual knowledge. By deriving these interdependencies from the structure of the knowledge-based system itself, EXPECT supports more flexible and powerful knowledge acquisition.

  2. A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving

    PubMed Central

    Crowley, Rebecca S.; Medvedeva, Olga

    2003-01-01

    We report on a general architecture for creating knowledge-based medical training systems to teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology, and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159

  3. Problem Solving Model for Science Learning

    NASA Astrophysics Data System (ADS)

    Alberida, H.; Lufri; Festiyed; Barlian, E.

    2018-04-01

    This research aims to develop a problem-solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students of SMP Kota Padang, analysis of SMP science teachers, learning analysis, as well as a literature review. The design phase includes planning the product, a problem-solving model for science learning, which consists of syntax, reaction principle, social system, support system, and instructional impact and support. The model is implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation, and c) revising the prototype. The implementation stage was carried out through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang and SMP National Padang. Based on this development research, the syntax of the problem-solving model for science learning at junior high school consists of the introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communicating.

  4. Development of a Preventive HIV Vaccine Requires Solving Inverse Problems Which Is Unattainable by Rational Vaccine Design

    PubMed Central

    Van Regenmortel, Marc H. V.

    2018-01-01

    Hypotheses and theories are essential constituents of the scientific method. Many vaccinologists are unaware that the problems they try to solve are mostly inverse problems that consist of imagining what could bring about a desired outcome. An inverse problem starts with the result and tries to guess the multiple causes that could have produced it. Compared to the usual direct scientific problems that start with the causes and derive or calculate the results using deductive reasoning and known mechanisms, solving an inverse problem uses a less reliable inductive approach and requires the development of a theoretical model that may have different solutions or none at all. Unsuccessful attempts to solve inverse problems in HIV vaccinology by reductionist methods, systems biology and structure-based reverse vaccinology are described. The popular strategy known as rational vaccine design is unable to solve the multiple inverse problems faced by HIV vaccine developers. The term "rational" is derived from "rational drug design", which uses the 3D structure of a biological target for designing molecules that will selectively bind to it and inhibit its biological activity. In vaccine design, however, the word "rational" simply means that the investigator is concentrating on parts of the system for which molecular information is available. The economist and Nobel laureate Herbert Simon introduced the concept of "bounded rationality" to explain why the complexity of the world economic system makes it impossible, for instance, to predict an event like the financial crash of 2007–2008. Humans always operate under unavoidable constraints such as insufficient information, a limited capacity to process huge amounts of data and a limited amount of time available to reach a decision. Such limitations always prevent us from achieving the complete understanding and optimization of a complex system that would be needed to achieve a truly rational design process. This is why the complexity of the human immune system prevents us from rationally designing an HIV vaccine by solving inverse problems. PMID:29387066

  5. Some Solved Problems with the SLAC PEP-II B-Factory Beam-Position Monitor System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Ronald G.

    2000-05-05

    The Beam-Position Monitor (BPM) system for the SLAC PEP-II B-Factory has been in operation for over two years. Although the BPM system has met all of its specifications, several problems with the system have been identified and solved. The problems include errors and limitations in both the hardware and software. Solutions of such problems have led to improved performance and reliability. In this paper the authors report on this experience. The process of identifying problems is not at an end and they expect continued improvement of the BPM system.

  6. W-algebra for solving problems with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Shevlyakov, A. O.; Matveev, M. G.

    2018-03-01

    A method of solving the problems with fuzzy parameters by means of a special algebraic structure is proposed. The structure defines its operations through operations on real numbers, which simplifies its use. It avoids deficiencies limiting applicability of the other known structures. Examples for solution of a quadratic equation, a system of linear equations and a network planning problem are given.

  7. Beyond rules: The next generation of expert systems

    NASA Technical Reports Server (NTRS)

    Ferguson, Jay C.; Wagner, Robert E.

    1987-01-01

    The PARAGON Representation, Management, and Manipulation system is introduced. The concepts of knowledge representation, knowledge management, and knowledge manipulation are combined in a comprehensive system for solving real world problems requiring high levels of expertise in a real time environment. In most applications the complexity of the problem and the representation used to describe the domain knowledge tend to obscure the information from which solutions are derived. This inhibits the acquisition and verification/validation of domain knowledge, places severe constraints on the ability to extend and maintain a knowledge base, and makes generic problem-solving strategies difficult to develop. A unique hybrid system was developed to overcome these traditional limitations.

  8. PAN AIR summary document (version 1.0)

    NASA Technical Reports Server (NTRS)

    Derbyshire, T.; Sidwell, K. W.

    1982-01-01

    The capabilities and limitations of the panel aerodynamics (PAN AIR) computer program system are summarized. This program uses a higher order panel method to solve boundary value problems involving the Prandtl-Glauert equation for subsonic and supersonic potential flows. Both aerodynamic and hydrodynamic problems can be solved using this modular software which is written for the CDC 6600 and 7600, and the CYBER 170 series computers.

  9. Network Polymers Formed Under Nonideal Conditions.

    DTIC Science & Technology

    1986-12-01

    the system or the limited ability of the statistical model to account for stochastic correlations. The viscosity of the reacting system was measured as...based on competing reactions (ring, chain) and employs equilibrium chain statistics. The work thus far has been limited to single cycle growth on an...polymerizations, because a large number of differential equations must be solved. The Markovian approach (sometimes referred to as the statistical or

  10. Minimization of transmission cost in decentralized control systems

    NASA Technical Reports Server (NTRS)

    Wang, S.-H.; Davison, E. J.

    1978-01-01

    This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.

  11. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    NASA Astrophysics Data System (ADS)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. The transmission networks in practice often have transmission lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power, as is the case in the Polish test systems, to limits on current. The main consideration in setting line limits is temperature, which linearly relates to current. Setting limits on real or apparent power is actually a proxy for using the limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the Alternating Current Optimal Power Flow (ACOPF) problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted.
The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction corresponding to the accuracy and AC-feasibility of the solution. This linearization was tested on the IEEE and Polish systems, which range from 14 to 3375 buses and 20 to 4161 transmission lines. It had an accuracy of 0.5% or less for all but the 30-bus system. It also solved in linear time with CPLEX, while the non-linear version solved in O(n^1.11) to O(n^1.39). The sequential linearization is slower than the nonlinear formulation for smaller problems, but faster for larger problems, and its linear computational time means it would continue solving faster for larger problems. A major consideration to implementing algorithms to solve the optimal generator dispatch is ensuring that the resulting prices from the algorithm will support the market. Since the sequential linearization is linear, it is convex, its marginal values are well-defined, and there is no duality gap. The prices and settlements obtained from the sequential linearization therefore can be used to run a market. This market will include extra prices and settlements for reactive power and voltage, compared to the present-day market, which is based on real power. An advantage of this is that there is a very clear pool that can be used for reactive power/voltage support payments, while presently there is not a clear pool to take them out of. This method also reveals how valuable reactive power and voltage are at different locations, which can enable better planning of reactive resource construction. Transmission switching increases the feasible region of the generator dispatch, which means there may be a better solution than without transmission switching. Power flows on transmission lines are not directly controllable; rather, the power flows according to how it is injected and the physical characteristics of the lines.
Changing the network topology changes the physical characteristics, which changes the flows. This means that generator dispatches that were previously infeasible because flows exceeded line limits may become feasible, since the flows will be different. However, transmission switching is a mixed integer problem, which may have a very slow solution time. For economic switching, we examine a series of heuristics. We examine the congestion rent heuristic in detail and then examine many other heuristics at a higher level. Post-contingency corrective switching aims to fix issues in the power network after a line or generator outage. In Chapter 7, we show that using the sequential linear program with corrective switching helps solve voltage and excessive flow issues. (Abstract shortened by UMI.)

  12. The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations

    NASA Astrophysics Data System (ADS)

    Rudmin, Joseph W.

    2001-04-01

    The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations Joseph W. Rudmin (Physics Dept, James Madison University) A new method for solving systems of differential equations will be presented, which has been developed by J. Edgar Parker and James Sochacki, of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The method yields high-degree solutions: 20th degree is easily obtainable. It is conceptually simple, fast, and extremely general. It has been applied to over a hundred systems of differential equations, some of which were previously unsolved, and has yet to fail to solve any system for which the Maclaurin series converges. The method is non-recursive: each coefficient in the series is calculated just once, in closed form, and its accuracy is limited only by the digital accuracy of the computer. Although the original differential equations may include any mathematical functions, the computational method includes ONLY the operations of addition, subtraction, and multiplication. Furthermore, it is perfectly suited to parallel-processing computer languages. Those who learn this system will never use Runge-Kutta or predictor-corrector methods again. Examples will be presented, including the classical many-body problem.
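The coefficient generation described above can be sketched in a few lines. The following is a minimal Python illustration (not Parker and Sochacki's own code) for the scalar ODE y' = y², y(0) = 1, whose exact solution 1/(1 - t) has Maclaurin coefficients all equal to 1; apart from one division by the integer n + 1, the update uses only addition and multiplication, as the abstract notes.

```python
# Parker-Sochacki-style coefficient generation, sketched for the scalar
# ODE y' = y^2 with y(0) = 1 (exact solution 1/(1 - t)).

def maclaurin_coeffs(degree, y0=1.0):
    """Return Maclaurin coefficients a[0..degree] for y' = y^2, y(0) = y0."""
    a = [y0]
    for n in range(degree):
        # Cauchy product: the coefficient of t^n in y^2 is sum_k a_k * a_{n-k};
        # integrating the series term by term gives a_{n+1} = (that sum)/(n+1).
        conv = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(conv / (n + 1))
    return a

def eval_series(coeffs, t):
    """Horner evaluation of the truncated Maclaurin series at t."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * t + c
    return acc
```

Evaluating the degree-20 series at t = 0.5, well inside the radius of convergence, reproduces the exact value 1/(1 - 0.5) = 2 to about six digits.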

  13. Numerics made easy: solving the Navier-Stokes equation for arbitrary channel cross-sections using Microsoft Excel.

    PubMed

    Richter, Christiane; Kotz, Frederik; Giselbrecht, Stefan; Helmer, Dorothea; Rapp, Bastian E

    2016-06-01

    The fluid mechanics of microfluidics is distinctively simpler than the fluid mechanics of macroscopic systems. In macroscopic systems, effects such as non-laminar flow, convection, and gravity need to be accounted for, all of which can usually be neglected in microfluidic systems. Still, there exists only a very limited selection of channel cross-sections for which the Navier-Stokes equation for pressure-driven Poiseuille flow can be solved analytically. From these equations, velocity profiles as well as flow rates can be calculated. However, whenever a cross-section is not highly symmetric (rectangular, elliptical or circular) the Navier-Stokes equation can usually not be solved analytically. In all of these cases, numerical methods are required. However, in many instances it is not necessary to turn to complex numerical solver packages for deriving, e.g., the velocity profile of a more complex microfluidic channel cross-section. In this paper, a simple spreadsheet analysis tool (here: Microsoft Excel) will be used to implement a simple numerical scheme which allows solving the Navier-Stokes equation for arbitrary channel cross-sections.
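The spreadsheet scheme amounts to relaxation on a grid. As an illustration (a minimal sketch, not the paper's Excel implementation), the same Jacobi iteration can be written in Python for the Poisson problem ∇²w = -G/μ with no-slip walls; the names `mask` and `G_over_mu` and the grid/iteration choices below are assumptions for the example.

```python
# Jacobi relaxation for pressure-driven Poiseuille flow: solve
#   d2w/dx2 + d2w/dy2 = -G/mu,   with w = 0 on the channel wall.
# Each interior cell is repeatedly replaced by the average of its four
# neighbours plus a source term -- the same update a spreadsheet cell
# formula referencing its neighbours performs.

def solve_poiseuille(mask, h, G_over_mu=1.0, iters=2000):
    """mask[i][j] is True for fluid cells; returns the axial velocity w."""
    ny, nx = len(mask), len(mask[0])
    w = [[0.0] * nx for _ in range(ny)]
    for _ in range(iters):
        new = [row[:] for row in w]
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                if not mask[i][j]:
                    continue  # solid/wall cell: w stays 0 (no slip)
                new[i][j] = 0.25 * (w[i - 1][j] + w[i + 1][j] +
                                    w[i][j - 1] + w[i][j + 1] +
                                    h * h * G_over_mu)
        w = new
    return w

# Square cross-section on a 9x9 grid; an arbitrary cross-section is just
# a different mask, which is the point made in the abstract.
mask = [[0 < i < 8 and 0 < j < 8 for j in range(9)] for i in range(9)]
w = solve_poiseuille(mask, h=1.0 / 8)
```

The resulting profile is maximal at the centre and symmetric about the channel axes, as expected for Poiseuille flow; the flow rate follows by summing w over the fluid cells times h².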

  14. Students' Dilemmas in Reaction Stoichiometry Problem Solving: Deducing the Limiting Reagent in Chemical Reactions

    ERIC Educational Resources Information Center

    Chandrasegaran, A. L.; Treagust, David F.; Waldrip, Bruce G.; Chandrasegaran, Antonia

    2009-01-01

    A qualitative case study was conducted to investigate the understanding of the limiting reagent concept and the strategies used by five Year 11 students when solving four reaction stoichiometry problems. Students' written problem-solving strategies were studied using the think-aloud protocol during problem-solving, and retrospective verbalisations…

  15. Vocabulary and Experiences to Develop a Center of Mass Model

    ERIC Educational Resources Information Center

    Kaar, Taylor; Pollack, Linda B.; Lerner, Michael E.; Engels, Robert J.

    2017-01-01

    The use of systems in many introductory courses is limited and often implicit. Modeling two or more objects as a system and tracking the center of mass of that system is usually not included. Thinking in terms of the center of mass facilitates problem solving while exposing the importance of using conservation laws. We present below three…

  16. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  17. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.

  18. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.

    2013-03-01

    An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation into electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE-30 bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  19. Efficient Instant Search

    ERIC Educational Resources Information Center

    Ji, Shengyue

    2011-01-01

    Traditional information systems return answers after a user submits a complete query. Users often feel "left in the dark" when they have limited knowledge about the underlying data, and have to use a try-and-see approach for finding information. The trend of supporting autocomplete in these systems is a first step towards solving this problem. A…

  20. Web-GIS based information management system for the Bureau of Law Enforcement for Urban Management

    NASA Astrophysics Data System (ADS)

    Sun, Hai; Wang, Cheng; Ren, Bo

    2007-06-01

    The daily work of the Law Enforcement Bureau is crucial to urban management. However, with the development of the city, the information and data related to the Bureau's daily work are constantly increasing and being updated. The growing volume of data makes some traditional working methods limited and inefficient. Analyzing the demands and obstacles of the Law Enforcement Bureau, this paper proposes a new method to solve these problems: a web-GIS based information management system produced for the Bureau of Law Enforcement for Urban Management of Foshan. The first part of the paper provides an overview of the system. The second part introduces the architecture of the system and its data organization. In the third part, the paper describes the design and implementation of the functional modules in detail. Finally, the paper concludes and proposes some strategic recommendations for the further development of the system. The paper focuses on the architecture and implementation of the system, addresses the development issues based on ArcServer, and introduces a new concept to the local government to solve the current problems. Practical application showed that the system plays a very important role in the Law Enforcement Bureau's work.

  1. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
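The penalty-function idea in this record can be illustrated with a deliberately simplified sketch: scalar (not interval-valued) component reliabilities, a single cost constraint, and a small GA using the operators the abstract names (tournament selection, uniform crossover, uniform mutation, elitism). All numbers and variable names below are illustrative assumptions, not the paper's model.

```python
import random

# Penalty-function GA for a toy redundancy-allocation problem: choose
# integer redundancy levels x_i maximizing the series-system reliability
#   R(x) = prod_i (1 - (1 - r_i)**x_i)
# under a cost budget. Infeasibility is folded into the objective as a
# penalty, turning the constrained problem into an unconstrained one.

R = [0.7, 0.8, 0.9]      # component reliabilities (illustrative)
COST = [2.0, 3.0, 4.0]   # cost per redundant unit (illustrative)
BUDGET = 20.0
PENALTY = 10.0           # weight on constraint violation

def fitness(x):
    rel = 1.0
    for r, n in zip(R, x):
        rel *= 1.0 - (1.0 - r) ** n
    excess = max(0.0, sum(c * n for c, n in zip(COST, x)) - BUDGET)
    return rel - PENALTY * excess  # penalized (unconstrained) objective

def run_ga(pop_size=40, gens=200, seed=0):
    rng = random.Random(seed)
    # seed the population with the minimal allocation, which is feasible
    pop = [[1] * len(R)] + \
          [[rng.randint(1, 5) for _ in R] for _ in range(pop_size - 1)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]  # elitism: carry the incumbent over
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 2), key=fitness)  # tournament selection
            b = max(rng.sample(pop, 2), key=fitness)
            child = [u if rng.random() < 0.5 else v
                     for u, v in zip(a, b)]           # uniform crossover
            if rng.random() < 0.2:                    # uniform mutation
                child[rng.randrange(len(child))] = rng.randint(1, 5)
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best
```

Because any budget overrun costs at least PENALTY fitness points while reliability is at most 1, a feasible candidate always dominates an infeasible one, so the returned allocation is guaranteed feasible here.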

  2. Math and numeracy in young adults with spina bifida and hydrocephalus.

    PubMed

    Dennis, Maureen; Barnes, Marcia

    2002-01-01

    The developmental stability of poor math skill was studied in 31 young adults with spina bifida and hydrocephalus (SBH), a neurodevelopmental disorder involving malformations of the brain and spinal cord. Longitudinally, individuals with poor math problem solving as children grew into adults with poor problem solving and limited functional numeracy. As a group, young adults with SBH had poor computation accuracy, computation speed, problem solving, and functional numeracy. Computation accuracy was related to a supporting cognitive system (working memory for numbers), and functional numeracy was related to one medical history variable (number of lifetime shunt revisions). Adult functional numeracy, but not functional literacy, was predictive of higher levels of social, personal, and community independence.

  3. Power Distribution System Planning with GIS Consideration

    NASA Astrophysics Data System (ADS)

    Wattanasophon, Sirichai; Eua-Arporn, Bundhit

    This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine appropriate location and size of a substation, routing of feeders, and sizes of conductors while satisfying all constraints, i.e. technical constraints (voltage drop and thermal limit) and geographical constraints (obstacle, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) consideration. In addition, this method integrates planner's experience and optimization process to achieve an appropriate practical solution. The proposed method has been tested with an actual distribution system, from which the results indicate that it can provide satisfactory plans.

  4. Solving the Swath Segment Selection Problem

    NASA Technical Reports Server (NTRS)

    Knight, Russell; Smith, Benjamin

    2006-01-01

    Several artificial-intelligence search techniques have been tested as means of solving the swath segment selection problem (SSSP) -- a real-world problem that is not only of interest in its own right, but is also useful as a test bed for search techniques in general. In simplest terms, the SSSP is the problem of scheduling the observation times of an airborne or spaceborne synthetic-aperture radar (SAR) system to effect the maximum coverage of a specified area (denoted the target), given a schedule of downlinks (opportunities for radio transmission of SAR scan data to a ground station), given the limit on the quantity of SAR scan data that can be stored in an onboard memory between downlink opportunities, and given the limit on the achievable downlink data rate. The SSSP is NP-complete (short for "nondeterministic polynomial-time complete" -- characteristic of a class of intractable problems that can be solved only by use of computers capable of making guesses and then checking the guesses in polynomial time).
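To make the constraint structure concrete, here is a toy greedy baseline (an illustration only; the paper evaluates artificial-intelligence search techniques, and a greedy pass like this is not guaranteed optimal for an NP-complete problem). Between consecutive downlinks the onboard recorder holds at most a fixed data volume, so within each inter-downlink window the sketch keeps the segments with the best coverage per unit of stored data. All names and numbers are assumptions.

```python
# Toy greedy for the swath-segment selection structure: in each
# inter-downlink window, keep segments with the best coverage per unit
# of recorded SAR data until the onboard memory capacity is reached.
# Assumes every segment has strictly positive coverage.

def select_segments(windows, capacity):
    """windows: list of windows, each a list of (name, coverage, data_volume)."""
    chosen, total_cov = [], 0.0
    for window in windows:
        used = 0.0
        # sort by data volume per unit coverage, i.e. best coverage density first
        for name, cov, vol in sorted(window, key=lambda s: s[2] / s[1]):
            if used + vol <= capacity:
                chosen.append(name)
                used += vol
                total_cov += cov
    return chosen, total_cov
```

On a single window with three equal-volume candidates, the greedy keeps the two highest-coverage segments and drops the third once the recorder is full.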

  5. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  6. Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression

    NASA Astrophysics Data System (ADS)

    Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.

    2018-05-01

    Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.
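For context, the Anandan-Aharonov bound mentioned in this record is commonly stated as follows (a standard form reproduced as background, not taken from the paper):

```latex
% Distance traversed by the state in projective Hilbert space during the
% evolution, in terms of the energy uncertainty:
\[
  s \;=\; \frac{2}{\hbar}\int_0^{T}\Delta E(t)\,dt,
  \qquad
  \Delta E \;=\; \sqrt{\langle H^{2}\rangle - \langle H\rangle^{2}},
\]
% which is bounded below by the geodesic (Fubini-Study) distance between
% the initial and final states:
\[
  s \;\ge\; s_0 \;=\; 2\arccos\bigl|\langle\psi(0)|\psi(T)\rangle\bigr|,
\]
% so that, for time-independent \Delta E, the evolution time obeys
\[
  T \;\ge\; \frac{\hbar\,\arccos\bigl|\langle\psi(0)|\psi(T)\rangle\bigr|}{\Delta E}.
\]
```

Imposing saturation of this bound (s = s0) in a suitable rotating frame is the step the abstract describes for estimating the actual evolution time.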

  7. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continually grow to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multiple objectives, we use Pareto optimality (MPSO). The results of MPSO are better than those of plain PSO because the MPSO solution set has a higher probability of containing the optimal solution and lies closer to it.
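    As a minimal illustration of the makespan objective the abstract minimizes (hypothetical processing times; the limited-wait constraint and the PSO search itself are omitted):

```python
# Sketch: makespan of a permutation flow shop schedule.
# p[j][m] is the (hypothetical) processing time of job j on machine m;
# jobs visit machines 0..M-1 in the same order.

def makespan(order, p):
    n_machines = len(p[0])
    # completion[m] = completion time of the previously scheduled job on machine m
    completion = [0] * n_machines
    for j in order:
        for m in range(n_machines):
            # a job starts on machine m when both the machine is free and the
            # job has finished on the previous machine
            start = completion[m] if m == 0 else max(completion[m], completion[m - 1])
            completion[m] = start + p[j][m]
    return completion[-1]

# Hypothetical 3-job, 2-machine instance
p = [[3, 2],   # job 0
     [2, 4],   # job 1
     [4, 1]]   # job 2
```

    A metaheuristic such as the paper's discrete PSO would then search over job orders to minimize this (and the other two) objectives.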

  8. Combined fast multipole-QR compression technique for solving electrically small to large structures for broadband applications

    NASA Technical Reports Server (NTRS)

    Jandhyala, Vikram (Inventor); Chowdhury, Indranil (Inventor)

    2011-01-01

    An approach that efficiently solves for a desired parameter of a system or device that can include both electrically large fast multipole method (FMM) elements, and electrically small QR elements. The system or device is setup as an oct-tree structure that can include regions of both the FMM type and the QR type. An iterative solver is then used to determine a first matrix vector product for any electrically large elements, and a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large elements and the electrically small elements are combined, and a net delta for a combination of the matrix vector products is determined. The iteration continues until a net delta is obtained that is within predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter.
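    The overall iteration can be sketched schematically as follows. This is a hedged stand-in, not the patent's method: the "electrically large" and "electrically small" operators are plain matrices here (rather than FMM and QR-compressed blocks), and a simple Richardson update plays the role of the iterative solver, converging only for suitably scaled well-conditioned systems.

```python
# Schematic combined iteration: two matrix-vector products (one per element class)
# are formed, combined, and the iteration stops when the net delta between
# successive iterates falls within a predefined limit.
import numpy as np

def solve_combined(A_large, A_small, b, alpha=0.1, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        mv = A_large @ x + A_small @ x       # combine the two matrix-vector products
        x_new = x + alpha * (b - mv)         # Richardson update (stand-in solver)
        if np.linalg.norm(x_new - x) < tol:  # net delta within predefined limits
            return x_new
        x = x_new
    return x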

  9. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
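    The tridiagonal Gaussian elimination used on the FLEX/32 and CRAY/2 is commonly known as the Thomas algorithm; a minimal sketch (a, b, c are the sub-, main, and super-diagonals, d the right-hand side):

```python
# Thomas algorithm: O(n) Gaussian elimination specialized to tridiagonal systems,
# the kind arising from each implicit sweep of an ADI method.

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    The recurrence is inherently sequential, which is why the massively parallel MPP used cyclic elimination instead.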

  11. Effective Control of Computationally Simulated Wing Rock in Subsonic Flow

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Menzies, Margaret A.

    1997-01-01

    The unsteady, compressible, full Navier-Stokes (NS) equations and the Euler equations of rigid-body dynamics are sequentially solved to simulate the delta wing rock phenomenon. The NS equations are solved time accurately using the implicit, upwind, Roe flux-difference splitting, finite-volume scheme. The rigid-body dynamics equations are solved using a four-stage Runge-Kutta scheme. Once the wing reaches the limit-cycle response, an active control model using a mass injection system is applied from the wing surface to suppress the limit-cycle oscillation. The active control model is based on state feedback, and the control law is established using pole placement techniques. The control law feeds back two states: the roll angle and the roll velocity. The primary configuration for the computational applications is an 80 deg swept, sharp-edged delta wing at 30 deg angle of attack in a freestream of Mach number 0.1 and Reynolds number of 0.4 x 10(exp 6). With a limit-cycle roll amplitude of 41.1 deg, the control model is applied, and the results show that within one and one half cycles of oscillation the wing roll amplitude and velocity are brought to zero.
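    A four-stage (classical) Runge-Kutta step for the rigid-body dynamics can be sketched as follows. The state is (roll angle, roll rate); `roll_moment` is a hypothetical self-excited model standing in for the aerodynamic moment, which in the paper comes from the Navier-Stokes solution at each step.

```python
# Classical RK4 time step for a first-order system ds/dt = f(s).

def rk4_step(f, state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

def roll_moment(state):
    phi, p = state
    # HYPOTHETICAL limit-cycle roll dynamics (van der Pol-like), not the paper's model
    return (p, 0.2 * (1.0 - phi ** 2) * p - phi)
```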

  12. Hybrid robust predictive optimization method of power system dispatch

    DOEpatents

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  13. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems that exchange information among connected neighbors, which greatly improves fault tolerance: a task can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that are solved in sequence or in parallel, so the computational complexity is greatly reduced. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems; our research focuses on framework and local optimizer design in practical engineering applications. First, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation accuracy and convergence speed. Second, we develop a cyber-physical system with distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridors and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for minimizing travel time on a highway network. Compared with the uncontrolled case and a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
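    The Bellman-Ford relaxation at the heart of the evacuation path planner can be sketched as follows (the dual-subgradient congestion terms are omitted, and the corridor graph is hypothetical):

```python
# Bellman-Ford single-source shortest paths: relax every edge n-1 times.

def bellman_ford(n_nodes, edges, source):
    INF = float("inf")
    dist = [INF] * n_nodes
    dist[source] = 0.0
    for _ in range(n_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Hypothetical corridor graph: (from, to, traversal cost)
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 4.0), (2, 3, 1.0), (1, 3, 5.0)]
```

    Because each relaxation pass uses only locally available edge data, the algorithm decomposes naturally across the distributed computation devices the abstract describes.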

  14. Automation technology for aerospace power management

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1982-01-01

    The growing size and complexity of spacecraft power systems, coupled with limited space/ground communications, necessitate increasingly automated onboard control systems. Research in computer science, particularly artificial intelligence, has developed methods and techniques for constructing man-machine systems with problem-solving expertise in limited domains, which may contribute to the automation of power systems. Since these systems perform tasks typically performed by human experts, they have become known as expert systems. A review of the current state of the art in expert systems technology is presented, and potential applications in power systems management are considered. It is concluded that expert systems have significant potential for improving the productivity of operations personnel in aerospace applications and for automating the control of many aerospace systems.

  15. HL-10 lifting body flight control system characteristics and operational experience

    NASA Technical Reports Server (NTRS)

    Painter, W. D.; Sitterle, G. J.

    1974-01-01

    A flight evaluation was made of the mechanical hydraulic flight control system and the electrohydraulic stability augmentation system installed in the HL-10 lifting body research vehicle. Flight tests performed in the speed range from landing to a Mach number of 1.86 and the altitude range from 697 meters (2300 feet) to 27,550 meters (90,300 feet) were supplemented by ground tests to identify and correct structural resonance and limit-cycle problems. Severe limit-cycle and control sensitivity problems were encountered during the first flight. Stability augmentation system structural resonance electronic filters were modified to correct the limit-cycle problem. Several changes were made to control stick gearing to solve the control sensitivity problem. Satisfactory controllability was achieved by using a nonlinear system. A limit-cycle problem due to hydraulic fluid contamination was encountered during the first powered flight, but the problem did not recur after preflight operations were improved.

  16. Identifying barriers to recovery from work related upper extremity disorders: use of a collaborative problem solving technique.

    PubMed

    Shaw, William S; Feuerstein, Michael; Miller, Virginia I; Wood, Patricia M

    2003-08-01

    Improving health and work outcomes for individuals with work related upper extremity disorders (WRUEDs) may require a broad assessment of potential return to work barriers by engaging workers in collaborative problem solving. In this study, half of all nurse case managers from a large workers' compensation system were randomly selected and invited to participate in a randomized, controlled trial of an integrated case management (ICM) approach for WRUEDs. The focus of ICM was problem solving skills training and workplace accommodation. Volunteer nurses attended a 2 day ICM training workshop including instruction in a 6 step process to engage clients in problem solving to overcome barriers to recovery. A chart review of WRUED case management reports (n = 70) during the following 2 years was conducted to extract case managers' reports of barriers to recovery and return to work. Case managers documented from 0 to 21 barriers per case (M = 6.24, SD = 4.02) within 5 domains: signs and symptoms (36%), work environment (27%), medical care (13%), functional limitations (12%), and coping (12%). Compared with case managers who did not receive the training (n = 67), workshop participants identified more barriers related to signs and symptoms, work environment, functional limitations, and coping (p < .05), but not to medical care. Problem solving skills training may help focus case management services on the most salient recovery factors affecting return to work.

  17. The limitations of mathematical modeling in high school physics education

    NASA Astrophysics Data System (ADS)

    Forjan, Matej

    The theme of the doctoral dissertation falls within the scope of the didactics of physics. A theoretical analysis is presented of the key constraints that occur when mathematical modeling of dynamical systems is transferred into physics education in secondary schools. In an effort to explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on how the various stages of modeling are represented in the solved tasks in the textbooks and on the presentation of certain simplifications and idealizations that are frequently used in high school physics. We show that one of the textbooks in most cases presents the simplifications fairly and reasonably, while the other two leave half of the analyzed simplifications unexplained. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly state the model assumptions, from which we conclude that high school physics does not sufficiently develop the students' sense for simplification and idealization, which is a key part of the conceptual phase of modeling. The students' prior knowledge is also important for introducing the modeling of dynamical systems, so we performed an empirical study of the extent to which high school students are able to understand the time evolution of some dynamical systems in the field of physics. The results show that the students have a very weak understanding of the dynamics of systems in which feedbacks are present, independent of their year of study or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations that result from the students' lack of mathematical knowledge, because they do not know how to solve differential equations analytically. 
    We show that for one-dimensional dynamical systems a geometrical approach to solving differential equations is appropriate, while for dynamical systems of higher dimensions the mathematical constraints can be avoided by using graphically oriented modeling programs. Because dynamical systems with four or more dimensions may pose problems for numerical solution, we also show how to overcome them. In the case of the electrostatic pendulum we demonstrate the process of modeling a real dynamical system, with particular emphasis on the different phases of modeling and on ways of overcoming the constraints encountered in the development of the model.
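    As an illustration of the kind of numerical route such modeling tools take when analytic solution is out of the students' reach, a minimal explicit Euler sketch for a one-dimensional system (a hypothetical decay example, not taken from the dissertation):

```python
# Explicit Euler integration of dx/dt = f(x): the simplest numerical substitute
# for solving a differential equation analytically.
import math

def euler(f, x0, dt, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

k = 0.5
xs = euler(lambda x: -k * x, 1.0, 0.01, 1000)   # integrate dx/dt = -k*x to t = 10
```

    Comparing `xs[-1]` with the exact value exp(-k*10) makes the discretization error itself a teachable point.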

  18. A review on economic emission dispatch problems using quantum computational intelligence

    NASA Astrophysics Data System (ADS)

    Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.

    2016-11-01

    Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, the limited availability of natural resources, and global warming have placed this topic at the center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques such as the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper should encourage researchers to use more QCI-based algorithms to obtain better optimal results for solving EED problems.

  19. Engineering neural systems for high-level problem solving.

    PubMed

    Sylvester, Jared; Reggia, James

    2016-07-01

    There is a long-standing, sometimes contentious debate in AI concerning the relative merits of a symbolic, top-down approach vs. a neural, bottom-up approach to engineering intelligent machine behaviors. While neurocomputational methods excel at lower-level cognitive tasks (incremental learning for pattern classification, low-level sensorimotor control, fault tolerance and processing of noisy data, etc.), they are largely non-competitive with top-down symbolic methods for tasks involving high-level cognitive problem solving (goal-directed reasoning, metacognition, planning, etc.). Here we take a step towards addressing this limitation by developing a purely neural framework named galis. Our goal in this work is to integrate top-down (non-symbolic) control of a neural network system with more traditional bottom-up neural computations. galis is based on attractor networks that can be "programmed" with temporal sequences of hand-crafted instructions that control problem solving by gating the activity retention of, communication between, and learning done by other neural networks. We demonstrate the effectiveness of this approach by showing that it can be applied successfully to solve sequential card matching problems, using both human performance and a top-down symbolic algorithm as experimental controls. Solving this kind of problem makes use of top-down attention control and the binding together of visual features in ways that are easy for symbolic AI systems but not for neural networks to achieve. Our model can not only be instructed on how to solve card matching problems successfully, but its performance also qualitatively (and sometimes quantitatively) matches the performance of both human subjects that we had perform the same task and the top-down symbolic algorithm that we used as an experimental control. 
We conclude that the core principles underlying the galis framework provide a promising approach to engineering purely neurocomputational systems for problem-solving tasks that in people require higher-level cognitive functions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Isolating Flow-field Discontinuities while Preserving Monotonicity and High-order Accuracy on Cartesian Meshes

    DTIC Science & Technology

    2017-01-09

    2017 Distribution A: Approved for public release; distribution unlimited. PA Clearance 17030. Introduction: Filtering schemes offer a less dissipative alternative to the standard artificial dissipation operators when applied to high-order spatial/temporal schemes. Limiting fact: filters impart ... systems require a preconditioned dual-time framework to be solved efficiently. Limiting fact: filtering cannot be applied only at the physical time ...

  1. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
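    Multiplication by digital convolution can be illustrated as follows: each operand is encoded as a list of digits, the digit lists are convolved (the operation the optical hardware performs in parallel), and carries are propagated afterwards in digital electronics. This is a sketch of the general technique, not of the report's specific encoding.

```python
# Multiply two integers by convolving their digit sequences (least-significant
# digit first), then propagating carries.

def convolve(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def digits_to_int(digits, base=10):
    # digits may exceed base-1 before carry propagation
    value, place, carry = 0, 1, 0
    for d in digits:
        d += carry
        value += (d % base) * place
        carry = d // base
        place *= base
    while carry:
        value += (carry % base) * place
        carry //= base
        place *= base
    return value

# 123 * 45: digit convolution [3,2,1] * [5,4] gives [15, 22, 13, 4] before carries
prod = digits_to_int(convolve([3, 2, 1], [5, 4]))
```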

  2. Lattice enumeration for inverse molecular design using the signature descriptor.

    PubMed

    Martin, Shawn

    2012-07-23

    We describe an inverse quantitative structure-activity relationship (QSAR) framework developed for the design of molecular structures with desired properties. This framework uses chemical fragments encoded with a molecular descriptor known as a signature. It solves a system of linear constrained Diophantine equations to reorganize the fragments into novel molecular structures. The method has been previously applied to problems in drug and materials design but has inherent computational limitations due to the necessity of solving the Diophantine constraints. We propose a new approach to overcome these limitations using the Fincke-Pohst algorithm for lattice enumeration. We benchmark the new approach against previous results on LFA-1/ICAM-1 inhibitory peptides, linear homopolymers, and hydrofluoroether foam blowing agents. Software implementing the new approach is available at www.cs.otago.ac.nz/homepages/smartin.
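    As a toy illustration of the Diophantine core of such inverse-design methods (a brute-force enumeration sketch, not the Fincke-Pohst lattice enumeration the paper proposes), one can list all nonnegative integer solutions of a single linear constraint, e.g. fragment counts whose weighted sum must hit a target value:

```python
# Enumerate all nonnegative integer solutions of c1*x1 + ... + cn*xn = target
# for positive integer coefficients, by recursive exhaustive search.

def enumerate_solutions(coeffs, target):
    if not coeffs:
        return [[]] if target == 0 else []
    sols = []
    head, rest = coeffs[0], coeffs[1:]
    for x in range(target // head + 1):
        for tail in enumerate_solutions(rest, target - head * x):
            sols.append([x] + tail)
    return sols
```

    The cost of this naive search grows combinatorially with the number of fragment types, which is exactly the limitation that motivates the lattice-enumeration approach.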

  3. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    PubMed

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.

  4. Current Advances and Future Directions in Behavior Assessment

    ERIC Educational Resources Information Center

    Riley-Tillman, T. Chris; Johnson, Austin H.

    2017-01-01

    Multi-tiered problem-solving models that focus on promoting positive outcomes for student behavior continue to be emphasized within educational research. Although substantial work has been conducted to support systems-level implementation and intervention for behavior, concomitant advances in behavior assessment have been limited. This is despite…

  5. Situational Awareness During Mass-Casualty Events: Command and Control

    PubMed Central

    Demchak, Barry; Chan, Theordore C.; Griswold, William G.; Lenert, Leslie

    2006-01-01

    In existing Incident Command systems, situational awareness is achieved manually through paper tracking systems. Such systems often produce high latencies and incomplete data, resulting in inefficient and ineffective resource deployment. The WIISARD system collects much more data than a paper-based system, dramatically reducing latency while increasing the kinds and quality of information available to Incident Commanders. The WIISARD Command Center solves the problem of data overload and uncertainty through the careful use of limited screen area and novel visualization techniques. PMID:17238524

  6. Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST

    NASA Astrophysics Data System (ADS)

    Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao

    2006-10-01

    The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species including the effects of gyroaveraging in the limit kρ ≪ 1. The TEMPEST equations are integrated as a differential algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.
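    The conjugate gradient iteration at the core of the GKP solve can be sketched as follows (plain CG on a small symmetric positive definite system; the multigrid preconditioner is omitted):

```python
# Unpreconditioned conjugate gradient for A x = b, A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate update of the search direction
        rs = rs_new
    return x
```

    CG convergence degrades when A is nearly singular, which is exactly why the abstract's decomposition procedure for the GKP Jacobian block matters.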

  7. The use of MACSYMA for solving elliptic boundary value problems

    NASA Technical Reports Server (NTRS)

    Thejll, Peter; Gilbert, Robert P.

    1990-01-01

    A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.

  8. Extending the Educational Planning Discourse: Conceptual and Paradigmatic Explorations.

    ERIC Educational Resources Information Center

    Adams, Don

    1988-01-01

    Argues that rational, functionalist models of educational planning that conceptualize decision-making as an algorithmic process are relevant to a limited number of educational problems. Suggests that educational questions pertaining to goals, needs, equity, and quality must be solved with soft systems thinking and its interpretivist and relativist…

  9. Translation: Aids, Robots, and Automation.

    ERIC Educational Resources Information Center

    Andreyewsky, Alexander

    1981-01-01

    Examines electronic aids to translation both as ways to automate it and as an approach to solve problems resulting from shortage of qualified translators. Describes the limitations of robotic MT (Machine Translation) systems, viewing MAT (Machine-Aided Translation) as the only practical solution and the best vehicle for further automation. (MES)

  10. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption and protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of an offshore platform with low-order main modes, based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex model is difficult for optimization algorithms to solve directly, we use a relaxation method with matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  11. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.

  12. Lions (Panthera leo) solve, learn, and remember a novel resource acquisition problem.

    PubMed

    Borrego, Natalia; Dowling, Brian

    2016-09-01

    The social intelligence hypothesis proposes that the challenges of complex social life bolster the evolution of intelligence, and accordingly, advanced cognition has convergently evolved in several social lineages. Lions (Panthera leo) offer an ideal model system for cognitive research in a highly social species with an egalitarian social structure. We investigated cognition in lions using a novel resource task: the suspended puzzle box. The task required lions (n = 12) to solve a novel problem, learn the techniques used to solve the problem, and remember techniques for use in future trials. The majority of lions demonstrated novel problem-solving and learning; lions (11/12) solved the task, repeated success in multiple trials, and significantly reduced the latency to success across trials. Lions also demonstrated cognitive abilities associated with memory and solved the task after up to a 7-month testing interval. We also observed limited evidence for social facilitation of the task solution. Four of five initially unsuccessful lions achieved success after being partnered with a successful lion. Overall, our results support the presence of cognition associated with novel problem-solving, learning, and memory in lions. To date, our study is only the second experimental investigation of cognition in lions and further supports expanding cognitive research to lions.

  13. Teaching Problem Solving Skills to Elementary Age Students with Autism

    ERIC Educational Resources Information Center

    Cote, Debra L.; Jones, Vita L.; Barnett, Crystal; Pavelek, Karin; Nguyen, Hoang; Sparks, Shannon L.

    2014-01-01

    Students with disabilities need problem-solving skills to promote their success in solving the problems of daily life. The research into problem-solving instruction has been limited for students with autism. Using a problem-solving intervention and the Self Determined Learning Model of Instruction, three elementary age students with autism were…

  14. Expert system applications for army vehicle diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halle, R.F.

    1987-01-01

    Bulky manuals, limited training procedures, and complex Automatic Test Equipment are but a few of the problems a mechanic must face when trying to repair many of the military's new and highly complex vehicle systems. Recent technological advances in Expert Systems have given the mechanic the potential to solve many of these problems and to actually enhance his maintenance proficiency. This paper describes both the history and the future potential of Expert Systems and how they could impact the present military maintenance system.

  15. Available Transfer Capability Determination Using Hybrid Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Jirapong, Peeraool; Ongsakul, Weerakorn

    2008-10-01

This paper proposes a new hybrid evolutionary algorithm (HEA) based on evolutionary programming (EP), tabu search (TS), and simulated annealing (SA) to determine the available transfer capability (ATC) of power transactions between different control areas in deregulated power systems. The optimal power flow (OPF)-based ATC determination is used to evaluate the feasible maximum ATC value within real and reactive power generation limits, line thermal limits, voltage limits, and voltage and angle stability limits. The HEA approach simultaneously searches for real power generations (except at the slack bus) in a source area, real power loads in a sink area, and generation bus voltages to solve the OPF-based ATC problem. Test results on the modified IEEE 24-bus reliability test system (RTS) indicate that the HEA could enhance ATC far more than the EP, TS, hybrid TS/SA, and improved EP (IEP) algorithms, leading to more efficient utilization of the existing transmission system.
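The SA component of such a hybrid scheme can be sketched as follows. The objective, step size, and cooling schedule below are illustrative stand-ins, not the paper's actual OPF-based ATC formulation:

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimise `objective` by simulated annealing (one component of a
    hybrid EP/TS/SA scheme such as the HEA; all parameters are assumed)."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for the OPF-based ATC objective (the real one evaluates
# power flows against thermal, voltage, and stability limits).
best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0, step=1.0)
```

In the full HEA, candidate moves would perturb generations, loads, and bus voltages subject to the operating limits, with TS memory and EP recombination layered on top.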

  16. Problem-Solving Phase Transitions During Team Collaboration.

    PubMed

    Wiltshire, Travis J; Butner, Jonathan E; Fiore, Stephen M

    2018-01-01

    Multiple theories of problem-solving hypothesize that there are distinct qualitative phases exhibited during effective problem-solving. However, limited research has attempted to identify when transitions between phases occur. We integrate theory on collaborative problem-solving (CPS) with dynamical systems theory suggesting that when a system is undergoing a phase transition it should exhibit a peak in entropy and that entropy levels should also relate to team performance. Communications from 40 teams that collaborated on a complex problem were coded for occurrence of problem-solving processes. We applied a sliding window entropy technique to each team's communications and specified criteria for (a) identifying data points that qualify as peaks and (b) determining which peaks were robust. We used multilevel modeling, and provide a qualitative example, to evaluate whether phases exhibit distinct distributions of communication processes. We also tested whether there was a relationship between entropy values at transition points and CPS performance. We found that a proportion of entropy peaks was robust and that the relative occurrence of communication codes varied significantly across phases. Peaks in entropy thus corresponded to qualitative shifts in teams' CPS communications, providing empirical evidence that teams exhibit phase transitions during CPS. Also, lower average levels of entropy at the phase transition points predicted better CPS performance. We specify future directions to improve understanding of phase transitions during CPS, and collaborative cognition, more broadly. Copyright © 2017 Cognitive Science Society, Inc.
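The sliding-window entropy idea in this record can be sketched directly: compute the Shannon entropy of communication codes within each window and look for a peak where the code mixture shifts. The window size, code labels, and toy stream below are illustrative assumptions:

```python
from collections import Counter
from math import log2

def shannon_entropy(codes):
    """Shannon entropy (bits) of a sequence of categorical codes."""
    n = len(codes)
    return -sum((c / n) * log2(c / n) for c in Counter(codes).values())

def sliding_entropy(codes, window=10):
    """Entropy over a sliding window, one value per window position."""
    return [shannon_entropy(codes[i:i + window])
            for i in range(len(codes) - window + 1)]

# Hypothetical coded communication stream: a homogeneous phase, a mixed
# transition region, then a second homogeneous phase.
stream = (["plan"] * 15
          + ["plan", "execute", "monitor", "execute", "plan"] * 3
          + ["execute"] * 15)
series = sliding_entropy(stream, window=10)
# Entropy should peak in the mixed middle region, i.e. at the transition.
peak_idx = max(range(len(series)), key=series.__getitem__)
```

Peak-robustness criteria and the multilevel model relating entropy at transitions to performance are beyond this sketch.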

  17. Expert system technology

    NASA Technical Reports Server (NTRS)

    Prince, Mary Ellen

    1987-01-01

    The expert system is a computer program which attempts to reproduce the problem-solving behavior of an expert, who is able to view problems from a broad perspective and arrive at conclusions rapidly, using intuition, shortcuts, and analogies to previous situations. Expert systems are a departure from the usual artificial intelligence approach to problem solving. Researchers have traditionally tried to develop general modes of human intelligence that could be applied to many different situations. Expert systems, on the other hand, tend to rely on large quantities of domain specific knowledge, much of it heuristic. The reasoning component of the system is relatively simple and straightforward. For this reason, expert systems are often called knowledge based systems. The report expands on the foregoing. Section 1 discusses the architecture of a typical expert system. Section 2 deals with the characteristics that make a problem a suitable candidate for expert system solution. Section 3 surveys current technology, describing some of the software aids available for expert system development. Section 4 discusses the limitations of the latter. The concluding section makes predictions of future trends.

  18. Price schedules coordination for electricity pool markets

    NASA Astrophysics Data System (ADS)

    Legbedji, Alexis Motto

    2002-04-01

We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques have been proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed. Consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques will still be relevant, among others, distributed systems---the Internet, perhaps, being the most conspicuous example---and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, such a system of agents can be optimized by constructing a large-scale mathematical program and solving it centrally with currently available computing power. In practice, however, because agents are self-interested and not willing to reveal some sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of the agents' objective functions with respect to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a duality gap is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program.
In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.

  19. Double-Slit Interference Pattern for a Macroscopic Quantum System

    NASA Astrophysics Data System (ADS)

    Naeij, Hamid Reza; Shafiee, Afshin

    2016-12-01

In this study, we solve analytically the Schrödinger equation for a macroscopic quantum oscillator as a central system coupled to two environmental micro-oscillating particles. Then, the double-slit interference patterns are investigated in two limiting cases, considering the limits of uncertainty in the position probability distribution. Moreover, we analyze the interference patterns based on a recent proposal called stochastic electrodynamics with spin. Our results show that when the quantum character of the macro-system is decreased, the diffraction pattern becomes more similar to a classical one. We also show that, depending on the size of the slits, the predictions of the quantum approach can differ appreciably from those of the aforementioned stochastic description.

  20. Exact solution of matricial Φ³₂ quantum field theory

    NASA Astrophysics Data System (ADS)

    Grosse, Harald; Sako, Akifumi; Wulkenhaar, Raimar

    2017-12-01

We apply a recently developed method to exactly solve the Φ³ matrix model with covariance of a two-dimensional theory, also known as the regularised Kontsevich model. Its correlation functions collectively describe graphs on a multi-punctured 2-sphere. We show how Ward-Takahashi identities and Schwinger-Dyson equations lead in a special large-N limit to integral equations that we solve exactly for all correlation functions. The solved model arises from noncommutative field theory in a special limit of strong deformation parameter. The limit defines ordinary 2D Schwinger functions which, however, do not satisfy reflection positivity.

  1. Bringing Installation Art to Reconnaissance to Share Values and Generate Action

    ERIC Educational Resources Information Center

    Townsend, Andrew; Thomson, Pat

    2015-01-01

    The English education system has recently seen something of a revival of enthusiasm for the use of research both to develop educational practices and to gather evidence about their effectiveness. These initiatives often present action research as a model of individual problem-solving, which, we argue, communicates a limited conception of action…

  2. Designing GIS Learning Materials for K-12 Teachers

    ERIC Educational Resources Information Center

    Hong, Jung Eun

    2017-01-01

    Although previous studies have proven the usefulness and effectiveness of geographic information system (GIS) use in the K-12 classroom, the rate of teacher adoption remains low. The identified major barrier to its use is a lack of teachers' background and experience. To solve this limitation, many organisations have provided GIS-related teacher…

  3. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
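As context for the sequence-to-sequence framing, reaction SMILES strings are typically split into chemically meaningful tokens before training. A common regex-based tokenizer from this literature is sketched below (an assumption; the paper's exact preprocessing is not given in the abstract):

```python
import re

# Regex-based SMILES tokenizer of the kind commonly used to prepare
# sequence-to-sequence training data: bracket atoms, two-letter halogens,
# ring-closure digits, and bond symbols each become single tokens.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    tokens = SMILES_TOKEN.findall(smiles)
    # Lossless check: the token stream must reconstruct the input exactly.
    assert "".join(tokens) == smiles, "tokenizer must be lossless"
    return tokens

tokens = tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```

The encoder-decoder itself would then consume these token sequences; that part of the pipeline is not reproduced here.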

  4. An overview of adaptive model theory: solving the problems of redundancy, resources, and nonlinear interactions in human movement control.

    PubMed

    Neilson, Peter D; Neilson, Megan D

    2005-09-01

    Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.
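The "neural adaptive filters" in this record suggest the classic least-mean-squares (LMS) update. The following sketch identifies a hypothetical 4-tap linear system by LMS; the plant coefficients and step size are illustrative assumptions, not AMT's actual cerebellar model:

```python
import numpy as np

def lms_filter(x, d, taps=4, mu=0.05):
    """Least-mean-squares adaptive FIR filter: adapt weights w so that
    w applied to recent inputs tracks the desired signal d[n]."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]  # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u                 # prediction error
        w += 2 * mu * e * u              # stochastic-gradient weight update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
true_w = np.array([0.5, -0.3, 0.2, 0.1])  # hypothetical plant to identify
d = np.convolve(x, true_w)[: len(x)]      # its noiseless output
w = lms_filter(x, d)                      # w should converge to true_w
```

AMT's modules additionally model nonlinear input-output relationships in real time; this linear sketch only shows the adaptive-filter core.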

  5. Research on Retro-reflecting Modulation in Space Optical Communication System

    NASA Astrophysics Data System (ADS)

    Zhu, Yifeng; Wang, Guannan

    2018-01-01

Retro-reflecting modulation space optical communication is a new type of free-space optical communication technology. Unlike a traditional free-space optical communication system, it applies an asymmetric optical architecture to reduce the size, weight, and power consumption of the system, which effectively overcomes the application limits of traditional free-space optical communication while still achieving information transmission. This paper introduces the composition and working principle of the retro-reflecting modulation optical communication system, analyzes the link budget of the system, reviews the types of optical systems and optical modulators, and summarizes future research directions and application prospects for this technology.

  6. FRAMES-2.0 Software System: Providing Password Protection and Limited Access to Models and Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; Pelton, Mitch A.

    2007-08-09

One of the most important concerns for regulatory agencies is the concept of reproducibility (i.e., reproducibility means credibility) of an assessment. One aspect of reproducibility deals with tampering of the assessment. In other words, when multiple groups are engaged in an assessment, it is important to lock down the problem that is to be solved and/or to restrict the models that are to be used to solve the problem. The objective of this effort is to provide the U.S. Nuclear Regulatory Commission (NRC) with a means to limit user access to models and to provide a mechanism to constrain the conceptual site models (CSMs) when appropriate. The purpose is to provide the user (i.e., NRC) with the ability to “lock down” the CSM (i.e., a picture containing linked icons), restrict access to certain models, or both.

  7. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. 
The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
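The weak Euler (Euler–Maruyama) building block that the global random walk superposes can be sketched for a single drift-diffusion Ito equation. The velocity and diffusion coefficient below are illustrative constants, not an upscaled random field:

```python
import numpy as np

def euler_maruyama_ensemble(n_particles=100_000, v=1.0, D=0.5,
                            dt=0.01, steps=100, seed=0):
    """Weak Euler scheme for an ensemble of notional particles obeying
    dX = v dt + sqrt(2 D) dW; this is the per-particle Ito building block
    that a global random walk advances for all lattice particles at once."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)  # all particles start at the origin
    for _ in range(steps):
        x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    return x

x = euler_maruyama_ensemble()
# Analytically, X(t) ~ Normal(v t, 2 D t); here t = 1, so mean 1, variance 1.
```

The PDF estimate would then be obtained by binning particle positions (and, in the full method, concentrations) on the lattice.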

  8. P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems

    NASA Technical Reports Server (NTRS)

    Kang, Kab S.

    2002-01-01

The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to multimaterial complex geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P(sub 1) nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning
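To illustrate the two linearization strategies named in this record, here is a minimal comparison on a scalar implicit update with a radiation-like quartic nonlinearity (a toy stand-in; the paper solves the full nonlinear diffusion system on unstructured grids):

```python
def newton_step(u_old, dt, tol=1e-12, max_iter=50):
    """Solve the implicit-Euler update u + dt*u**4 = u_old by Newton's method.
    Scalar toy for a radiation-like quartic nonlinearity."""
    u = u_old
    for i in range(max_iter):
        r = u + dt * u**4 - u_old          # residual
        if abs(r) < tol:
            return u, i
        u -= r / (1.0 + 4.0 * dt * u**3)   # Newton: divide by dF/du
    return u, max_iter

def picard_step(u_old, dt, tol=1e-12, max_iter=200):
    """Same update by Picard (fixed-point) iteration u <- u_old - dt*u**4:
    the nonlinearity is frozen at the previous iterate."""
    u = u_old
    for i in range(max_iter):
        u_new = u_old - dt * u**4
        if abs(u_new - u) < tol:
            return u_new, i
        u = u_new
    return u, max_iter

u_newton, n_newton = newton_step(1.0, 0.1)
u_picard, n_picard = picard_step(1.0, 0.1)
```

Both iterations reach the same root, but Newton's quadratic convergence needs far fewer iterations than the linearly convergent Picard sweep, mirroring the trade-off studied in the paper.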

  9. Online system for knowledge assessment enhances students' results on school knowledge test.

    PubMed

    Kralj, Benjamin; Glazar, Sasa Aleksej

    2013-01-01

A variety of online tools has been built to help assess students' performance in school. Many teachers have changed their methods of assessment from paper-and-pencil (P&P) to online systems. In this study we analyse the influence that using an online system for knowledge assessment has on students' knowledge. Based on both a literature study and our own research we designed and built an online system for knowledge assessment. The system was evaluated using two groups of primary school teachers and students (N = 686) in Slovenia: an experimental and a control group. Students solved P&P exams on several occasions. The experimental group was allowed to access the system either at school or at home for a limited period during the presentation of a selected school topic. Students in the experimental group were able to solve tasks and compare their own achievements with those of their coevals. A comparison of the P&P school exam results achieved by both groups revealed a positive effect on subject topic comprehension for those with access to the online self-assessment system.

  10. Assessment of a 3-D boundary layer code to predict heat transfer and flow losses in a turbine

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.

    1984-01-01

Zonal concepts are utilized to delineate regions of application of three-dimensional boundary layer (3-DBL) theory. The zonal approach requires three distinct analyses. A modified version of the 3-DBL code named TABLET is used to analyze the boundary layer flow. This modified code solves the finite difference form of the compressible 3-DBL equations in a nonorthogonal surface coordinate system which includes Coriolis forces produced by coordinate rotation. These equations are solved using an efficient, implicit, fully coupled finite difference procedure. The nonorthogonal surface coordinate system is calculated using a general analysis based on the transfinite mapping of Gordon, which is valid for any arbitrary surface. Experimental data are used to determine the boundary layer edge conditions. The boundary layer edge conditions are determined by integrating the boundary layer edge equations, which are the Euler equations at the edge of the boundary layer, using the known experimental wall pressure distribution. Starting solutions along the inflow boundaries are estimated by solving the appropriate limiting form of the 3-DBL equations.

  11. Steady flow model user's guide

    NASA Astrophysics Data System (ADS)

    Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.

    1984-07-01

Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium have been used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithm used to solve it is outlined, and the input and output of the SFM are described.

  12. From problem solving to problem definition: scrutinizing the complex nature of clinical practice.

    PubMed

    Cristancho, Sayra; Lingard, Lorelei; Regehr, Glenn

    2017-02-01

    In medical education, we have tended to present problems as being singular, stable, and solvable. Problem solving has, therefore, drawn much of medical education researchers' attention. This focus has been important but it is limited in terms of preparing clinicians to deal with the complexity of the 21st century healthcare system in which they will provide team-based care for patients with complex medical illness. In this paper, we use the Soft Systems Engineering principles to introduce the idea that in complex, team-based situations, problems usually involve divergent views and evolve with multiple solution iterations. As such we need to shift the conversation from (1) problem solving to problem definition, and (2) from a problem definition derived exclusively at the level of the individual to a definition derived at the level of the situation in which the problem is manifested. Embracing such a focus on problem definition will enable us to advocate for novel educational practices that will equip trainees to effectively manage the problems they will encounter in complex, team-based healthcare.

  13. The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff

    2004-01-01

The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instrument assessed attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…

  14. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. 
Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n³) time and O(n²) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method, for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. 
This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
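The role of tapering, independent of the solver, can be sketched as follows: multiplying a globally supported covariance by a compactly supported (Wendland-type) taper zeroes out long-range entries, making the kriging system sparse. The covariance model, taper range, and point set below are illustrative assumptions:

```python
import numpy as np

def wendland_taper(h, theta):
    """Wendland-type taper (1-t)^4 (4t+1): positive definite in up to
    three dimensions and exactly zero beyond the taper range theta."""
    t = np.clip(h / theta, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(400, 2))               # scattered sites
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

cov = np.exp(-h / 2.0)                  # exponential covariance, global support
tapered = cov * wendland_taper(h, 1.0)  # exactly zero beyond distance 1

density = np.count_nonzero(tapered) / tapered.size  # fraction of nonzeros
```

The resulting sparse, symmetric system is what an iterative solver such as SYMMLQ then handles; this sketch stops at the sparsification step.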

  15. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.
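For context, the minimal length uncertainty relation referred to in this record is usually taken in the Kempf form (the convention and the deformation parameter β shown here are assumptions about the paper's notation):

```latex
% Deformed commutator and the resulting generalized uncertainty relation
[\hat{x}, \hat{p}] = i\hbar \left( 1 + \beta \hat{p}^2 \right)
\quad \Longrightarrow \quad
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} \left( 1 + \beta (\Delta p)^2 \right),
\qquad
\Delta x_{\min} = \hbar \sqrt{\beta}.
```

Minimising the bound over Δp (at Δp = 1/√β) gives the nonzero minimal position uncertainty Δx_min, which is the feature responsible for the positive energy-level shifts reported in the abstract.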

  16. Analysis and Assessment of Operation Risk for Hybrid AC/DC Power System based on the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Hu, Xiaojing; Li, Qiang; Zhang, Hao; Guo, Ziming; Zhao, Kun; Li, Xinpeng

    2018-06-01

Based on the Monte Carlo method, an improved risk assessment method for hybrid AC/DC power systems with VSC stations is proposed that considers the operation status of generators, converter stations, AC lines and DC lines. According to the sequential AC/DC power flow algorithm, node voltages and line active powers are solved, and then the operation risk indices of node voltage over-limit and line active power over-limit are calculated. Finally, an improved two-area IEEE RTS-96 system is taken as a case to analyze and assess its operation risk. The results show that the proposed model and method can intuitively and directly reflect the weak nodes and weak lines of the system, which can provide some reference for the dispatching department.
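A minimal Monte Carlo sketch of the over-limit risk index: sample component outages, evaluate a stand-in system response, and count limit violations. The toy two-line redundancy model below is purely illustrative; the paper evaluates sequential AC/DC power flows instead:

```python
import random

def mc_overlimit_probability(n_samples=50_000, p_fail=0.05, limit=1.2, seed=42):
    """Estimate the probability of a line-loading over-limit event by
    Monte Carlo sampling of component states (toy system model assumed)."""
    rng = random.Random(seed)
    overlimit = 0
    for _ in range(n_samples):
        # Two parallel lines share 1.0 p.u. of flow; if one is out, the
        # survivor carries everything and exceeds its 0.6 p.u. rating.
        line1_up = rng.random() > p_fail
        line2_up = rng.random() > p_fail
        n_up = line1_up + line2_up
        loading = (1.0 / n_up) / 0.6 if n_up else float("inf")
        if loading > limit:
            overlimit += 1
    return overlimit / n_samples

prob = mc_overlimit_probability()
# Analytically, P(over-limit) = 1 - (1 - p_fail)**2 = 0.0975 for this toy.
```

The paper's indices aggregate such violation counts over node voltages and line flows computed from full AC/DC power-flow solutions.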

  17. A satellite mobile communication system based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA)

    NASA Technical Reports Server (NTRS)

    Degaudenzi, R.; Elia, C.; Viola, R.

    1990-01-01

Discussed here is a new approach to code division multiple access applied to a mobile system for voice (and data) services based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA). The system requires users to be chip-synchronized to reduce the contribution of self-interference and makes use of voice activation in order to increase the satellite power efficiency. To achieve spectral efficiency, Nyquist chip pulse shaping is used with no detection performance impairment. The synchronization problems are solved in the forward link by distributing a master code, whereas carrier forced activation and closed-loop control techniques have been adopted in the return link. System performance sensitivity to nonlinear amplification and timing/frequency synchronization errors is analyzed.

  18. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

    Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
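A generic convex-homotopy root finder captures the idea of deforming from a simple problem to the target one. This scalar sketch (not the paper's coupled compensator matrix equations) tracks the root as the homotopy parameter moves from 0 to 1:

```python
def homotopy_solve(f, df, x0, steps=50, newton_iters=5):
    """Track the root of H(x, t) = (1 - t)*(x - x0) + t*f(x) from the
    trivial problem at t = 0 to f(x) = 0 at t = 1, correcting with a few
    Newton steps at each continuation point."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1.0 - t) * (x - x0) + t * f(x)      # homotopy residual
            dh = (1.0 - t) + t * df(x)               # its derivative in x
            x -= h / dh                              # Newton correction
    return x

# Toy target: the classic cubic x**3 - 2x - 5 = 0 (Wallis's equation).
root = homotopy_solve(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2, x0=0.0)
```

In the paper, the "simple problem" is the centralized full-order compensator and the continuation is over the architecture constraint; the same predictor-corrector structure applies, but on matrix equations.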

  19. Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns

    NASA Technical Reports Server (NTRS)

    Shaeffer, John

    2008-01-01

    Matrix methods for solving integral equations via direct LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by exploiting the numerically low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand-side forcing function, and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. Compressed matrix storage and operation counts lead to orders-of-magnitude reductions in memory and run time.
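    As a concrete illustration of the compression step, below is a minimal sketch of partially pivoted Adaptive Cross Approximation in Python; the function names and the rank-2 test block are ours, not from the report, and a production solver would apply this to complex impedance blocks instead.

```python
import math

def aca(entry, m, n, tol=1e-10, max_rank=None):
    """Partially pivoted Adaptive Cross Approximation: build a low-rank
    factorization A ~= sum_k u_k v_k^T from sampled entries of an
    m x n block; the full block is never formed."""
    max_rank = max_rank or min(m, n)
    us, vs = [], []                  # rank-1 factors
    used_rows = {0}
    i = 0                            # current pivot row
    for _ in range(max_rank):
        # residual of row i: A[i, :] minus the current approximation
        row = [entry(i, j) - sum(u[i] * v[j] for u, v in zip(us, vs))
               for j in range(n)]
        j = max(range(n), key=lambda c: abs(row[c]))
        if abs(row[j]) < tol:        # block is numerically low rank: done
            break
        v = [x / row[j] for x in row]            # normalized pivot row
        u = [entry(r, j) - sum(uu[r] * vv[j] for uu, vv in zip(us, vs))
             for r in range(m)]                  # residual pivot column
        us.append(u)
        vs.append(v)
        rest = [r for r in range(m) if r not in used_rows]
        if not rest:
            break
        i = max(rest, key=lambda r: abs(u[r]))   # next pivot row
        used_rows.add(i)
    return us, vs

def entry(i, j):                     # an exactly rank-2 test block
    return math.cos(i) * math.cos(j) + 0.5 * (i + 1) * (j + 1)

us, vs = aca(entry, 8, 8)
err = max(abs(entry(i, j) - sum(u[i] * v[j] for u, v in zip(us, vs)))
          for i in range(8) for j in range(8))
```

    For an exactly rank-2 block the loop terminates after two cross updates and the factors reproduce every entry to round-off, which is the property the compressed LU solver relies on.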

  20. Perspectives on Problem Solving and Instruction

    ERIC Educational Resources Information Center

    van Merrienboer, Jeroen J. G.

    2013-01-01

    Most educators claim that problem solving is important, but they take very different perspectives on it and there is little agreement on how it should be taught. This article aims to sort out the different perspectives and discusses problem solving as a goal, a method, and a skill. As a goal, problem solving should not be limited to well-structured…

  1. The Effects of a Problem Solving Intervention on Problem Solving Skills of Students with Autism during Vocational Tasks

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza

    2013-01-01

    Problem solving is an important employability skill and considered valuable both in educational settings (Agran & Alper, 2000) and the workplace (Ju, Zhang, & Pacha, 2012). However, limited research exists instructing students with autism to engage in problem solving skills (e.g., Bernard-Opitz, Sriram, & Nakhoda-Sapuan, 2001). The…

  2. Using Coaching to Improve the Teaching of Problem Solving to Year 8 Students in Mathematics

    ERIC Educational Resources Information Center

    Kargas, Christine Anestis; Stephens, Max

    2014-01-01

    This study investigated how to improve the teaching of problem solving in a large Melbourne secondary school. Coaching was used to support and equip five teachers, some with limited experiences in teaching problem solving, with knowledge and strategies to build up students' problem solving and reasoning skills. The results showed increased…

  3. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations at the basis of many fields of investigation, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the solving of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test this idea, we chose to implement DCMARK on a single FPGA, designing the single processor so as to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we designed a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
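    The locally interconnected update at the heart of such a solver can be sketched in a few lines; the discretization below (central differences, forward Euler, periodic boundary, illustrative parameters) is a generic sketch, not necessarily DCMARK's actual scheme.

```python
import math

def kdv_step(u, dt, dx):
    """One explicit step of the KdV equation u_t = -(3u^2)_x - u_xxx on a
    periodic grid; every cell reads only its four nearest neighbours,
    mirroring a locally interconnected CNN-style array."""
    n = len(u)
    out = []
    for i in range(n):
        um2, um1 = u[i - 2], u[i - 1]            # periodic via negative index
        up1, up2 = u[(i + 1) % n], u[(i + 2) % n]
        flux = (3.0 * up1 * up1 - 3.0 * um1 * um1) / (2.0 * dx)   # (3u^2)_x
        uxxx = (up2 - 2.0 * up1 + 2.0 * um1 - um2) / (2.0 * dx ** 3)
        out.append(u[i] - dt * (flux + uxxx))
    return out

n, dx, dt = 64, 0.5, 1e-4
u = [math.exp(-((i - n / 2) * dx) ** 2) for i in range(n)]   # Gaussian pulse
mass0 = sum(u)
for _ in range(50):
    u = kdv_step(u, dt, dx)
```

    Since both spatial terms are central differences of periodic quantities, the discrete mass sum(u) is conserved to round-off, a convenient correctness check for each cell-level processor.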

  4. Extending self-organizing particle systems to problem solving.

    PubMed

    Rodríguez, Alejandro; Reggia, James A

    2004-01-01

    Self-organizing particle systems consist of numerous autonomous, purely reflexive agents ("particles") whose collective movements through space are determined primarily by local influences they exert upon one another. Inspired by biological phenomena (bird flocking, fish schooling, etc.), particle systems have been used not only for biological modeling, but also increasingly for applications requiring the simulation of collective movements such as computer-generated animation. In this research, we take some first steps in extending particle systems so that they not only move collectively, but also solve simple problems. This is done by giving the individual particles (agents) a rudimentary intelligence in the form of a very limited memory and a top-down, goal-directed control mechanism that, triggered by appropriate conditions, switches them between different behavioral states and thus different movement dynamics. Such enhanced particle systems are shown to be able to function effectively in performing simulated search-and-collect tasks. Further, computational experiments show that collectively moving agent teams are more effective than similar but independently moving ones in carrying out such tasks, and that agent teams of either type that split off members of the collective to protect previously acquired resources are most effective. This work shows that the reflexive agents of contemporary particle systems can readily be extended to support goal-directed problem solving while retaining their collective movement behaviors. These results may prove useful not only for future modeling of animal behavior, but also in computer animation, coordinated movement control in robotic teams, particle swarm optimization, and computer games.

  5. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  6. A minimally-resolved immersed boundary model for reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar

    2013-12-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.

  7. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  8. Characterization of bending EAP beams

    NASA Technical Reports Server (NTRS)

    Bao, Xiaoqi; Bar-Cohen, Yoseph; Chang, Zensheu; Sherrit, Stewart

    2004-01-01

    Electroactive polymers are attractive actuation materials because of their large deformation, flexibility, and light weight. A CCD camera system was constructed to record the curved shapes of bending EAP films during activation, and image-processing software was developed to digitize the bending curves. A computer program was developed to solve the inverse problem of cantilever EAP beams with a tip position limiter. Using the developed program and curves acquired without a tip position limiter, as well as the corresponding tip force, the EAP material properties of voltage-strain sensitivity and Young's modulus were determined.

  9. The divine clockwork: Bohr's correspondence principle and Nelson's stochastic mechanics for the atomic elliptic state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durran, Richard; Neate, Andrew; Truman, Aubrey

    2008-03-15

    We consider the Bohr correspondence limit of the Schroedinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2, which do not occur classically.

  10. Online thesis guidance management information system

    NASA Astrophysics Data System (ADS)

    Nasution, T. H.; Pratama, F.; Tanjung, K.; Siregar, I.; Amalia, A.

    2018-03-01

    The development of internet technology in education is still not fully exploited, especially in the process of thesis guidance between students and lecturers. A difficulty faced by lecturers in guiding students' theses is the limited communication time and the problem of matching schedules between students and lecturers. To solve this problem, we designed an online thesis guidance management information system that helps students and lecturers carry out the thesis tutoring process anytime, anywhere. The system consists of a web-based admin app for usage management and an Android-based app for students and lecturers.

  11. Modeling the ion transfer and polarization of ion exchange membranes in bioelectrochemical systems.

    PubMed

    Harnisch, Falk; Warmbier, Robert; Schneider, Ralf; Schröder, Uwe

    2009-06-01

    An explicit numerical model for the charge-balancing ion transfer across monopolar ion exchange membranes under the conditions of bioelectrochemical systems is presented. Diffusion and migration equations have been solved according to the Nernst-Planck equation, and the resulting ion concentrations, pH values and membrane resistance values for different conditions were computed. The modeling results underline the principal limitations of the application of ion exchange membranes in biological fuel cells and electrolyzers, caused by the inherent occurrence of a pH gradient between the anode and cathode compartments, and an increased ohmic membrane resistance at decreasing electrolyte concentrations. Finally, the physical and numerical limitations of the model are discussed.
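    The flux law underlying such a model combines a diffusive and a migrative term; a minimal sketch with illustrative numbers (not taken from the paper) is:

```python
F, R, T = 96485.0, 8.314, 298.15     # Faraday const, gas const, temperature

def nernst_planck_flux(D, z, c, dc_dx, dphi_dx):
    """Nernst-Planck flux density of one ionic species (mol m^-2 s^-1):
    diffusion down the concentration gradient plus migration of charge z
    in the local electric field -dphi/dx."""
    return -D * dc_dx - D * z * F / (R * T) * c * dphi_dx

# a cation crossing a cation-exchange membrane (illustrative numbers)
J = nernst_planck_flux(D=9.3e-9, z=1, c=1.0, dc_dx=-10.0, dphi_dx=-50.0)
```

    With the field switched off the migration term vanishes and the expression reduces to Fick's law, which is a quick sanity check on the signs.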

  12. Solving nonlinear equilibrium equations of deformable systems by method of embedded polygons

    NASA Astrophysics Data System (ADS)

    Razdolsky, A. G.

    2017-09-01

    Solving nonlinear algebraic equations is an obligatory stage of studying the equilibrium paths of nonlinear deformable systems. An iterative method for solving a system of nonlinear algebraic equations stated in explicit or implicit form is developed in the present work. The method consists of constructing a sequence of polygons in Euclidean space that converges to a single point representing the solution of the system. Polygon vertices are determined on the assumption that the individual equations of the system are independent of each other and each is a function of only one variable. Initial positions of the vertices for each subsequent polygon are specified at the midpoints of certain straight segments determined at the previous iteration. The present algorithm is applied to an analytical investigation of the behavior of a biaxially compressed nonlinear-elastic beam-column with an open thin-walled cross-section. Numerical examples are given for an I-beam-column on the assumption that its material follows a bilinear stress-strain diagram. A computer program based on the shooting method is developed for solving the problem. The method reduces to numerical integration of a system of differential equations and to the solution of a system of nonlinear algebraic equations relating the boundary values of displacements at the ends of the beam-column. The stress distribution over the beam-column cross-sections is determined by subdividing the cross-section area into many small cells. The equilibrium paths for the twisting angle and the lateral displacements tend to a stationary point as the load is increased. The configuration of the path curves reveals that the ultimate load is reached shortly after the maximum normal stresses in the beam-column exceed the limit of the elastic region. The beam-column has a unique equilibrium state for each value of the load; that is, there are no equilibrium states once the maximum load is reached.

  13. Enabling X-ray free electron laser crystallography for challenging biological systems from a limited number of crystals

    DOE PAGES

    Uervirojnangkoorn, Monarin; Zeldin, Oliver B.; Lyubimov, Artem Y.; ...

    2015-03-17

    There is considerable potential for X-ray free electron lasers (XFELs) to enable determination of macromolecular crystal structures that are difficult to solve using current synchrotron sources. Prior XFEL studies often involved the collection of thousands to millions of diffraction images, in part due to limitations of data processing methods. We implemented a data processing system based on classical post-refinement techniques, adapted to specific properties of XFEL diffraction data. When applied to XFEL data from three different proteins collected using various sample delivery systems and XFEL beam parameters, our method improved the quality of the diffraction data as well as the resulting refined atomic models and electron density maps. Moreover, the number of observations for a reflection necessary to assemble an accurate data set could be reduced to a few observations. In conclusion, these developments will help expand the applicability of XFEL crystallography to challenging biological systems, including cases where sample is limited.

  15. Enabling X-ray free electron laser crystallography for challenging biological systems from a limited number of crystals

    PubMed Central

    Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Lyubimov, Artem Y; Hattne, Johan; Brewster, Aaron S; Sauter, Nicholas K; Brunger, Axel T; Weis, William I

    2015-01-01

    There is considerable potential for X-ray free electron lasers (XFELs) to enable determination of macromolecular crystal structures that are difficult to solve using current synchrotron sources. Prior XFEL studies often involved the collection of thousands to millions of diffraction images, in part due to limitations of data processing methods. We implemented a data processing system based on classical post-refinement techniques, adapted to specific properties of XFEL diffraction data. When applied to XFEL data from three different proteins collected using various sample delivery systems and XFEL beam parameters, our method improved the quality of the diffraction data as well as the resulting refined atomic models and electron density maps. Moreover, the number of observations for a reflection necessary to assemble an accurate data set could be reduced to a few observations. These developments will help expand the applicability of XFEL crystallography to challenging biological systems, including cases where sample is limited. DOI: http://dx.doi.org/10.7554/eLife.05421.001 PMID:25781634

  16. How does PET/MR work? Basic physics for physicians.

    PubMed

    Delso, Gaspar; Ter Voert, Edwin; Veit-Haibach, Patrick

    2015-08-01

    The aim of this article is to provide Radiologists and Nuclear Medicine physicians with the basic information required to understand how PET/MR scanners work, what their limitations are, and how to evaluate their performance. It will cover the operational principles of standalone PET and MR imaging, as well as the technical challenges of creating a hybrid system and how they have been solved in the now commercially available scanners. Guidelines will be provided to interpret the main performance figures of hybrid PET/MR systems.

  17. The Parker-Sochacki Method of Solving Differential Equations: Applications and Limitations

    NASA Astrophysics Data System (ADS)

    Rudmin, Joseph W.

    2006-11-01

    The Parker-Sochacki method is a powerful but simple technique of solving systems of differential equations, giving either analytical or numerical results. It has been in use for about 10 years now since its discovery by G. Edgar Parker and James Sochacki of the James Madison University Dept. of Mathematics and Statistics. It is being presented here because it is still not widely known and can benefit the listeners. It is a method of rapidly generating the Maclaurin series to high order, non-iteratively. It has been successfully applied to more than a hundred systems of equations, including the classical many-body problem. Its advantages include its speed of calculation, its simplicity, and the fact that it uses only addition, subtraction and multiplication. It is not just a polynomial approximation, because it yields the Maclaurin series, and therefore exhibits the advantages and disadvantages of that series. A few applications will be presented.
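    To give the flavor of the method: substituting a Maclaurin series into the simple ODE y' = y², y(0) = 1 (our example, not from the talk) turns the equation into a recurrence built from a Cauchy product, so each coefficient is generated non-iteratively using additions and multiplications alone.

```python
def parker_sochacki(a0, order):
    """Maclaurin coefficients of the solution of y' = y^2, y(0) = a0.
    Substituting y = sum a[k] t^k into the ODE gives the recurrence
    a[k+1] = (1/(k+1)) * sum_{m=0..k} a[m]*a[k-m]   (a Cauchy product),
    so every coefficient follows from earlier ones directly."""
    a = [a0]
    for k in range(order):
        a.append(sum(a[m] * a[k - m] for m in range(k + 1)) / (k + 1))
    return a

coeffs = parker_sochacki(1.0, 10)          # a0 .. a10
t = 0.1
approx = sum(c * t ** k for k, c in enumerate(coeffs))
```

    The exact solution 1/(1 − t) has all Maclaurin coefficients equal to 1, so the truncated series at t = 0.1 agrees with it to about t¹¹, illustrating both the speed of coefficient generation and the series' finite radius of convergence noted above.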

  18. Wind Power Ramping Product for Increasing Power System Flexibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Mingjian; Zhang, Jie; Wu, Hongyu

    With increasing penetrations of wind power, system operators are concerned about a potential lack of system flexibility and ramping capacity in real-time dispatch stages. In this paper, a modified dispatch formulation is proposed considering the wind power ramping product (WPRP). A swinging door algorithm (SDA) and dynamic programming are combined and used to detect WPRPs in the next scheduling periods. The detected WPRPs are included in the unit commitment (UC) formulation considering ramping capacity limits, active power limits, and flexible ramping requirements. The modified formulation is solved by mixed integer linear programming. Numerical simulations on a modified PJM 5-bus system show the effectiveness of the model considering WPRP, which not only reduces the production cost but also does not affect the generation schedules of thermal units.
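    The ramp-detection step can be illustrated with a minimal swinging door sketch; the tolerance and test signal are illustrative, and the paper's actual SDA/dynamic-programming combination is more elaborate.

```python
def swinging_door(y, eps):
    """Swinging door segmentation: emit a breakpoint whenever no single
    straight line within +/- eps can cover all samples since the last
    breakpoint; the kept indices delimit the detected ramps."""
    kept = [0]
    a = 0                                    # anchor: last kept index
    lo, hi = float('-inf'), float('inf')     # swinging slope bounds
    for i in range(1, len(y)):
        dt = i - a
        lo = max(lo, (y[i] - eps - y[a]) / dt)
        hi = min(hi, (y[i] + eps - y[a]) / dt)
        if lo > hi:                          # doors crossed: close segment
            kept.append(i - 1)
            a = i - 1
            dt = i - a
            lo = (y[i] - eps - y[a]) / dt
            hi = (y[i] + eps - y[a]) / dt
    kept.append(len(y) - 1)
    return kept

signal = [0, 1, 2, 3, 4, 4, 4, 4, 4]         # an up-ramp, then a plateau
ramps = swinging_door(signal, eps=0.2)
```

    On this ramp-then-plateau signal the algorithm keeps only indices 0, 4 and 8, the endpoints of the two linear segments.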

  19. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    An estimate of the state vector of a physical system is considered for the case in which the weight matrix in the method of least squares is a function of this vector. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate, which reduces to the solution of a system of algebraic equations, is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
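    The iterative procedure can be sketched for a scalar parameter: freeze the weight matrix at the current estimate, solve the weighted normal equation, and repeat until a fixed point is reached. The Cauchy-type weight function and the data below are hypothetical illustrations, not from the paper.

```python
def iterate_estimate(a, b, weight, x0, tol=1e-12, max_iter=200):
    """Least squares with a state-dependent weight matrix: evaluate the
    weights at the current estimate, solve the weighted normal equation
    for a scalar parameter, and repeat until the estimate converges."""
    x = x0
    for _ in range(max_iter):
        w = [weight(x, ai, bi) for ai, bi in zip(a, b)]
        x_new = (sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
                 / sum(wi * ai * ai for wi, ai in zip(w, a)))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# hypothetical Cauchy-type weights: down-weight large residuals
def cauchy_weight(x, ai, bi):
    return 1.0 / (1.0 + (bi - ai * x) ** 2)

a = [1.0, 2.0, 3.0, 4.0]
b = [2.1, 3.9, 6.2, 7.7]          # roughly b = 2a, so x should be near 2
x_hat = iterate_estimate(a, b, cauchy_weight, x0=0.0)
```

    Convergence of this map to a fixed point is exactly what the paper's existence and uniqueness conditions address.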

  20. Teaching Mathematics Problem Solving to Students with Limited English Proficiency through Nested Spiral Approach.

    ERIC Educational Resources Information Center

    Chyu, Chi-Oy W.

    The Nested Spiral Approach (NSA) is an integrated instructional approach used to promote the motivated learning of mathematics problem solving in limited-English-proficient (LEP) students. The NSA is described and a trial use is discussed. The approach extends, elaborates, and supplements existing education and instruction theories to help LEP…

  1. Effective algorithm for solving complex problems of production control and of material flows control of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Mezentsev, Yu A.; Baranova, N. V.

    2018-05-01

    A universal economic-mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of industrial enterprises is considered. This universality makes it possible to account, at the system level, both for production components, including restrictions on the ways of converting raw materials and components into sold goods, and for resource and logical restrictions on input and output material flows. The model and the control problems generated from it are developed within a unified framework that allows logical conditions of any complexity to be implemented and the corresponding formal optimization problems to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming problems are shown to belong to the class NP. An approximate polynomial-time algorithm is proposed for solving the posed mixed-programming optimization problems of realistic dimension and high computational complexity. Results of testing the algorithm on problems over a wide range of dimensions are presented.

  2. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
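    The benchmark instance {2, 5, 9} is small enough to check by direct sequential enumeration, which is the same exponential exploration the motor-propelled agents carry out physically in parallel (the code is ours, for illustration):

```python
from itertools import combinations

def subset_sums(nums):
    """Enumerate every subset of nums, as the agents explore every path
    through the network, and record which sums are reachable and by
    which subsets."""
    sums = {}
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            sums.setdefault(sum(combo), set()).add(combo)
    return sums

reachable = subset_sums((2, 5, 9))     # the benchmark instance {2, 5, 9}
```

    The reachable sums are 0, 2, 5, 7, 9, 11, 14 and 16, while a target such as 4 is unreachable; the sequential enumeration visits all 2³ subsets one after another, which is precisely the cost the parallel device avoids.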

  3. The development and evaluation of a web-based programme to support problem-solving skills following brain injury.

    PubMed

    Powell, Laurie Ehlhardt; Wild, Michelle R; Glang, Ann; Ibarra, Summer; Gau, Jeff M; Perez, Amanda; Albin, Richard W; O'Neil-Pirozzi, Therese M; Wade, Shari L; Keating, Tom; Saraceno, Carolyn; Slocumb, Jody

    2017-10-24

    Cognitive impairments following brain injury, including difficulty with problem solving, can pose significant barriers to successful community reintegration. Problem-solving strategy training is well-supported in the cognitive rehabilitation literature. However, limitations in insurance reimbursement have resulted in fewer services to train such skills to mastery and to support generalization of those skills into everyday environments. The purpose of this project was to develop and evaluate an integrated, web-based programme, ProSolv, which uses a small number of coaching sessions to support problem solving in everyday life following brain injury. We used participatory action research to guide the iterative development, usability testing, and within-subject pilot testing of the ProSolv programme. The finalized programme was then evaluated in a between-subjects group study and a non-experimental single case study. Results were mixed across studies. Participants demonstrated that it was feasible to learn and use the ProSolv programme for support in problem solving. They highly recommended the programme to others and singled out the importance of the coach. Limitations in app design were cited as a major reason for infrequent use of the app outside of coaching sessions. Results provide mixed evidence regarding the utility of web-based mobile apps, such as ProSolv, to support problem solving following brain injury. Implications for Rehabilitation: People with cognitive impairments following brain injury often struggle with problem solving in everyday contexts. Research supports problem solving skills training following brain injury. Assistive technology for cognition (smartphones, selected apps) offers a means of supporting problem solving for this population. This project demonstrated the feasibility of a web-based programme to address this need.

  4. Bypassing the Kohn-Sham equations with machine learning.

    PubMed

    Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert

    2017-10-11

    Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.
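    The learning step can be sketched with kernel ridge regression on a toy one-dimensional descriptor; the data, kernel and hyperparameters below are illustrative inventions, whereas the paper learns density-potential and energy-density maps for real molecules.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]     # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def krr(train_x, train_y, gamma, lam):
    """Kernel ridge regression: learn a descriptor -> energy map from
    examples, bypassing any explicit functional form."""
    k = lambda a, b: math.exp(-gamma * (a - b) ** 2)
    n = len(train_x)
    K = [[k(train_x[i], train_x[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = gauss_solve(K, train_y)
    return lambda x: sum(a * k(x, xi) for a, xi in zip(alpha, train_x))

# toy "density descriptor -> energy" data: E(d) = d^2 (purely illustrative)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
predict = krr(xs, ys, gamma=2.0, lam=1e-8)
```

    Once the regression weights are solved for, each prediction is just a cheap kernel sum, which is the source of the computational savings described above.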

  5. Y-12 PLANT NUCLEAR SAFETY HANDBOOK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachter, J.W. ed.; Bailey, M.L.; Cagle, T.J.

    1963-03-27

    Information needed to solve nuclear safety problems is condensed into a reference book for use by persons familiar with the field. Included are a glossary of terms; useful tables; nuclear constants; criticality calculations; basic nuclear safety limits; solution geometries and critical values; metal critical values; criticality values for intermediate, heterogeneous, and interacting systems; miscellaneous and related information; and report number, author, and subject indexes. (C.H.)

  6. Two atoms in an anisotropic harmonic trap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Z.; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, 02-668 Warsaw; Calarco, T.

    2005-05-15

    We consider the system of two interacting atoms confined in an axially symmetric harmonic trap. Within the pseudopotential approximation, we solve the Schroedinger equation exactly, discussing the limits of quasi-one- and quasi-two-dimensional geometries. Finally, we discuss the application of an energy-dependent pseudopotential, which allows us to extend the validity of our results to the case of tight traps and large scattering lengths.

  7. The HIGHLEAD program: locating and designing highlead harvest units by using digital terrain models.

    Treesearch

    R.H. Twito; S.E. Reutebuch; R.J. McGaughey

    1988-01-01

    PLANS, a software package for integrated timber-harvest planning, uses digital terrain models to provide the topographic data needed to fit harvest and transportation designs to specific terrain. HIGHLEAD, an integral program in the PLANS package, is used to design the timber-harvest units to be yarded by highlead systems. It solves for the yarding limits of direct...

  8. Solving Component Structural Dynamic Failures Due to Extremely High Frequency Structural Response on the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Frady, Greg; Nesman, Thomas; Zoladz, Thomas; Szabo, Roland

    2010-01-01

    For many years, the capability to determine the root cause of component failures was limited by the available analytical tools and the state of the art in data acquisition systems. With this limited capability, many anomalies were resolved by adding material to the design to increase robustness, without the ability to determine whether the design solution was satisfactory until after a series of expensive test programs was complete. The risk of failure and of multiple design, test, and redesign cycles was high. During the Space Shuttle Program, many crack investigations in high energy density turbomachines, such as the SSME turbopumps, and in high energy flows in the main propulsion system led to the discovery of numerous root-cause failures and anomalies arising from the coexistence of acoustic forcing functions, structural natural modes, and a high energy excitation source, such as an edge tone or shedding flow. These investigations led the technical community to understand many of the primary contributors to extremely high frequency, high cycle fatigue fluid-structure interaction anomalies. These contributors have been identified using advanced analysis tools and verified during component ground tests, systems tests, and flight. The structural dynamics and fluid dynamics communities have developed a special sensitivity to fluid-structure interaction problems and have been able to solve them in a time-effective manner, meeting the budget and schedule deadlines of operational vehicle programs such as the Space Shuttle Program.

  9. A Distributed Algorithm for Economic Dispatch Over Time-Varying Directed Networks With Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Lu, Jie; Wu, Di

    In power system operation, the economic dispatch problem (EDP) is designed to minimize the total generation cost while meeting the demand and satisfying generator capacity limits. This paper proposes an algorithm based on the gradient-push method to solve the EDP in a distributed manner over communication networks, potentially with time-varying topologies and communication delays. It has been shown that the proposed method is guaranteed to solve the EDP if the time-varying directed communication network is uniformly jointly strongly connected. Moreover, the proposed algorithm is also able to handle arbitrarily large but bounded time delays on communication links. Numerical simulations are used to illustrate and validate the proposed algorithm.
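
    The paper's distributed gradient-push algorithm is beyond a short snippet, but the underlying EDP has a classic centralized solution, bisection on the common incremental cost λ, that makes the problem concrete. All generator data below are illustrative, not from the paper.

```python
import numpy as np

# Quadratic generator costs C_i(p) = a_i p^2 + b_i p, capacity limits,
# and a total demand D (illustrative numbers, not from the paper).
a = np.array([0.04, 0.03, 0.035])
b = np.array([2.0, 1.8, 2.2])
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([100.0, 120.0, 80.0])
D = 180.0

def dispatch(lmbda):
    """Each generator runs where its marginal cost equals lmbda, clipped to its limits."""
    return np.clip((lmbda - b) / (2 * a), p_min, p_max)

# Bisection on the common incremental cost until supply meets demand.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dispatch(mid).sum() < D:
        lo = mid
    else:
        hi = mid
p_opt = dispatch(0.5 * (lo + hi))
print(p_opt, p_opt.sum())
```

    At the optimum all unconstrained generators share the same marginal cost, which is exactly the consensus quantity a distributed gradient method drives the agents toward.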

  10. An Intervention Framework Designed to Develop the Collaborative Problem-Solving Skills of Primary School Students

    ERIC Educational Resources Information Center

    Gu, Xiaoqing; Chen, Shan; Zhu, Wenbo; Lin, Lin

    2015-01-01

    Considerable effort has been invested in innovative learning practices such as collaborative inquiry. Collaborative problem solving is becoming popular in school settings, but there is limited knowledge on how to develop skills crucial in collaborative problem solving in students. Based on the intervention design in social interaction of…

  11. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps

    DOE PAGES

    Isotalo, Aarno; Pusa, Maria

    2016-05-01

    The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
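
    The computational point, that identical substeps let the step matrix be factorized once and reused, can be illustrated without CRAM's tabulated poles and residues. The sketch below substitutes implicit Euler on a hypothetical two-nuclide decay chain; only the factorize-once reuse pattern carries over to CRAM.

```python
import numpy as np

# Two-nuclide decay chain: N1 -> N2 -> (removed), dN/dt = A @ N.
lam1, lam2 = 1.0, 0.5
A = np.array([[-lam1, 0.0],
              [lam1, -lam2]])
N0 = np.array([1.0, 0.0])
t, substeps = 1.0, 2000
h = t / substeps

# Identical substeps: the matrix (I - h A) is the same for every substep,
# so its factorization (here a precomputed inverse; in production, an LU
# decomposition) is formed once and reused -- the point made in the abstract.
M = np.linalg.inv(np.eye(2) - h * A)
N = N0.copy()
for _ in range(substeps):
    N = M @ N

# Analytical solution of the chain for comparison.
exact = np.array([
    np.exp(-lam1 * t),
    lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t)),
])
print(N, exact)
```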

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
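
    Each surveyed solver has its own API, but the shape of such a benchmark can be suggested with SciPy's `linprog` (whose default HiGHS backend is itself open source). The LP below is a small classic textbook instance, not one of the survey's test problems.

```python
from scipy.optimize import linprog

# Minimize c @ x subject to A_ub @ x <= b_ub, x >= 0 (illustrative LP,
# not one of the test problems from the survey).
c = [-3.0, -5.0]            # maximize 3x + 5y  ->  minimize -(3x + 5y)
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # known optimum of this classic LP: x = 2, y = 6, value 36
```

    A solver comparison like the one described would time many such instances, of much larger size, across each candidate library's native interface.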

  13. Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma

    DOE PAGES

    Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...

    2016-09-01

    We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.

  14. Error behavior of multistep methods applied to unstable differential systems

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1977-01-01

    The problem of modeling a dynamic system described by a system of ordinary differential equations which has unstable components for limited periods of time is discussed. It is shown that the global error in a multistep numerical method is the solution to a difference equation initial value problem, and the approximate solution is given for several popular multistep integration formulas. Inspection of the solution leads to the formulation of four criteria for integrators appropriate to unstable problems. A sample problem is solved numerically using three popular formulas and two different stepsizes to illustrate the appropriateness of the criteria.
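
    The claim that the global error of a multistep method obeys a difference equation can be probed numerically: on the unstable test equation y' = y, the two-step Adams-Bashforth method should show its theoretical second-order error decay. A minimal check:

```python
import math

def ab2_error(h):
    """Global error at t = 1 of 2-step Adams-Bashforth on y' = y, y(0) = 1."""
    n = round(1.0 / h)
    y_prev, y = 1.0, math.exp(h)      # bootstrap the second value exactly
    for _ in range(n - 1):
        # AB2: y_{n+1} = y_n + h * (3/2 f_n - 1/2 f_{n-1}), with f = y here
        y_prev, y = y, y + h * (1.5 * y - 0.5 * y_prev)
    return abs(y - math.e)

e1, e2 = ab2_error(0.01), ab2_error(0.005)
print(e1 / e2)  # ~4 for a second-order method
```

    Even though the solution grows, the error ratio near 4 under step halving confirms the error itself follows the predicted difference-equation behavior.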

  15. The Convergence of Intelligences

    NASA Astrophysics Data System (ADS)

    Diederich, Joachim

    Minsky (1985) argued that an extraterrestrial intelligence may be similar to ours despite very different origins. ``Problem-solving'' offers evolutionary advantages and individuals who are part of a technical civilisation should have this capacity. On earth, the principles of problem-solving are the same for humans, some primates and machines based on Artificial Intelligence (AI) techniques. Intelligent systems use ``goals'' and ``sub-goals'' for problem-solving, with memories and representations of ``objects'' and ``sub-objects'' as well as knowledge of relations such as ``cause'' or ``difference.'' Some of these objects are generic and cannot easily be divided into parts. We must, therefore, assume that these objects and relations are universal, and a general property of intelligence. Minsky's arguments from 1985 are extended here. The last decade has seen the development of a general learning theory (``computational learning theory'' (CLT) or ``statistical learning theory'') which equally applies to humans, animals and machines. It is argued that basic learning laws will also apply to an evolved alien intelligence, and this includes limitations of what can be learned efficiently. An example from CLT is that the general learning problem for neural networks is intractable, i.e. it cannot be solved efficiently for all instances (it is ``NP-complete''). It is the objective of this paper to show that evolved intelligences will be constrained by general learning laws and will use task-decomposition for problem-solving. Since learning and problem-solving are core features of intelligence, it can be said that intelligences converge despite very different origins.

  16. CFO compensation method using optical feedback path for coherent optical OFDM system

    NASA Astrophysics Data System (ADS)

    Moon, Sang-Rok; Hwang, In-Ki; Kang, Hun-Sik; Chang, Sun Hyok; Lee, Seung-Woo; Lee, Joon Ki

    2017-07-01

    We investigate the feasibility of a carrier frequency offset (CFO) compensation method using an optical feedback path for a coherent optical orthogonal frequency division multiplexing (CO-OFDM) system. Recently proposed CFO compensation algorithms provide a wide CFO estimation range in the electrical domain. However, their practical compensation range is limited by the sampling rate of the analog-to-digital converter (ADC). This limitation has not drawn attention, since the ADC sampling rate was high enough compared to the data bandwidth and CFO in wireless OFDM systems. For CO-OFDM, the limitation is becoming visible because of increased data bandwidth, laser instability (i.e. large CFO), and ADC sampling rates kept low for cost reasons. To solve this problem and extend the practical CFO compensation range, we propose a CFO compensation method with an optical feedback path. By adding simple wavelength control for the local oscillator, the practical CFO compensation range can be extended to the full sampling frequency range. The feasibility of the proposed method is investigated experimentally.

  17. Exact results in the large system size limit for the dynamics of the chemical master equation, a one dimensional chain of equations.

    PubMed

    Martirosyan, A; Saakian, David B

    2011-08-01

    We apply the Hamilton-Jacobi equation (HJE) formalism to solve the dynamics of the chemical master equation (CME). We find exact analytical expressions (in the large system-size limit) for the probability distribution, including an explicit expression for the dynamics of the variance of the distribution. We also give the solution for some simple cases of the model with time-dependent rates. We derive the results of the Van Kampen method from the HJE approach using a special ansatz. Using the Van Kampen method, we give a system of ordinary differential equations (ODEs) to define the variance in the two-dimensional case. We perform numerics for the CME with stationary noise and give analytical criteria for the disappearance of bistability in the case of stationary noise in one-dimensional CMEs.

  18. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem; however, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton–Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on two-phase flow problems in which phase appearance/disappearance occurs, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving two-phase flow problems with phase appearance/disappearance; no special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and second-order fully implicit method also demonstrated their capability to significantly reduce numerical errors.

  19. Illicit Trafficking in the Western Hemisphere: Developing an Operational Approach to Defeat Smuggling within the Region

    DTIC Science & Technology

    2017-03-31

    and political stability. The threat is currently so pervasive that solving it is impossible without significant strategic reframing. A design ... approach will offer a better understanding of the functions and systems used for illicit trafficking. An operational design will be useful for developing a...

  20. Formalism for the solution of quadratic Hamiltonians with large cosine terms

    NASA Astrophysics Data System (ADS)

    Ganeshan, Sriram; Levin, Michael

    2016-02-01

    We consider quantum Hamiltonians of the form H = H0 − U ∑j cos(Cj), where H0 is a quadratic function of position and momentum variables {x1, p1, x2, p2, ⋯} and the Cj's are linear in these variables. We allow H0 and Cj to be completely general with only two restrictions: we require that (1) the Cj's are linearly independent and (2) [Cj, Ck] is an integer multiple of 2πi for all j, k so that the different cosine terms commute with one another. Our main result is a recipe for solving these Hamiltonians and obtaining their exact low-energy spectrum in the limit U → ∞. This recipe involves constructing creation and annihilation operators and is similar in spirit to the procedure for diagonalizing quadratic Hamiltonians. In addition to our exact solution in the infinite U limit, we also discuss how to analyze these systems when U is large but finite. Our results are relevant to a number of different physical systems, but one of the most natural applications is to understanding the effects of electron scattering on quantum Hall edge modes. To demonstrate this application, we use our formalism to solve a toy model for a fractional quantum spin Hall edge with different types of impurities.

  1. A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Stamer, Torsten; Inutsuka, Shu-ichiro

    2018-06-01

    We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.

  2. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    To increase the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives such as makespan, total machine load, and total tardiness. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric-based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
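
    Two of the generic ingredients mentioned, a non-dominated archive and the crowding-distance mechanism, can be sketched independently of the paper's dispatching rules (all objectives are assumed to be minimized; the sample points are made up):

```python
import numpy as np

def non_dominated(points):
    """Indices of points not dominated by any other (all objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

def crowding_distance(front):
    """NSGA-II style crowding distance for points on one front."""
    front = np.asarray(front, float)
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(front[:, k])
        span = front[order[-1], k] - front[order[0], k] or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf   # keep boundary solutions
        for idx in range(1, n - 1):
            dist[order[idx]] += (front[order[idx + 1], k]
                                 - front[order[idx - 1], k]) / span
    return dist

pts = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [4.0, 4.0], [5.0, 1.0]])
front = non_dominated(pts)
d = crowding_distance(pts[front])
print(front, d)
```

    A fixed-size archive then keeps the non-dominated points with the largest crowding distance, preserving the spread of the Pareto front.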

  3. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoeba-like spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
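
    For reference, a conventional stochastic search baseline of the kind the authors benchmark against can be approximated by a WalkSAT-style local search; the clause encoding and the tiny 3-SAT instance below are illustrative, not from the Article.

```python
import random

def satisfied(clause, assign):
    """A clause is a list of signed ints: 3 means x3, -3 means NOT x3."""
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign            # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:         # random walk move
            var = abs(rng.choice(clause))
        else:                        # greedy move: flip the var satisfying most clauses
            var = max((abs(l) for l in clause),
                      key=lambda v: sum(
                          satisfied(c, {**assign, v: not assign[v]})
                          for c in clauses))
        assign[var] = not assign[var]
    return None

cnf = [[1, 2, -3], [-1, 3, 4], [2, -4, 5], [-2, -5, 3], [1, -3, -5]]
model = walksat(cnf, 5)
print(model)
```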

  4. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients, meaning that a spherical harmonic analysis is first done independently for each observation class (resp. model), solving dense normal equations (NEQs) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Obviously such methods are unable to identify or eliminate effects such as spectral leakage due to band limitations of the models and non-orthogonality of the spherical harmonic base functions. To counter such problems, a combination of both models on the NEQ basis is desirable. Theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new favourable sparsity pattern that admits an efficient solution method. Such a pattern can be found in the so-called kite structure (Bosch, 1993), achieved when applying the kite ordering to the stacked NEQ matrix. In a first step it is shown what is needed to attain the kite (NEQ) system, how to solve it efficiently, and how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is put on the presentation of special distributed algorithms which may solve those systems in parallel on an arbitrary number of processes and are therefore suitable for application on supercomputers (such as SuperMUC). Finally, some in-detail problems are shown that occur when dealing with high degree spherical harmonic base functions (mostly due to instabilities of Legendre polynomials), introducing also an appropriate solution for each.

  5. Excitations in the Yang–Gaudin Bose gas

    DOE PAGES

    Robinson, Neil J.; Konik, Robert M.

    2017-06-01

    Here, we study the excitation spectrum of two-component delta-function interacting bosons confined to a single spatial dimension, the Yang–Gaudin Bose gas. We show that there are pronounced finite-size effects in the dispersion relations of excitations, perhaps best illustrated by the spinon single-particle dispersion, which exhibits a gap at 2kF and a finite-momentum roton-like minimum. Such features occur at energies far above the finite-volume excitation gap, vanish slowly as 1/L for fixed spinon number, and can persist to the thermodynamic limit at fixed spinon density. Features such as the 2kF gap also persist in multi-particle excitation continua. Our results show that excitations in the finite system can behave in a qualitatively different manner to analogous excitations in the thermodynamic limit. The Yang–Gaudin Bose gas is also host to multi-spinon bound states, known as Λ-strings. We study these excitations both in the thermodynamic limit under the string hypothesis and in finite-size systems where string deviations are taken into account. In the zero-temperature limit we present a simple relation between the length-n Λ-string dressed energies εn(λ) and the dressed energy ε(k). We solve the Yang–Yang–Takahashi equations numerically and compare to the analytical solution obtained under the strong-coupling expansion, revealing that the length-n Λ-string dressed energy is Lorentzian over a wide range of real string centers λ in the vicinity of λ = 0. We then examine the finite-size effects present in the dispersion of the two-spinon bound states by numerically solving the Bethe ansatz equations with string deviations.


  7. Finding a roadmap to achieve large neuromorphic hardware systems

    PubMed Central

    Hasler, Jennifer; Marr, Bo

    2013-01-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time. PMID:24058330

  8. Some Applications of Algebraic System Solving

    ERIC Educational Resources Information Center

    Roanes-Lozano, Eugenio

    2011-01-01

    Technology and, in particular, computer algebra systems, allows us to change both the way we teach mathematics and the mathematical curriculum. Curiously enough, unlike what happens with linear system solving, algebraic system solving is not widely known. The aim of this paper is to show that, although the theory lying behind the "exact…

  9. Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju

    2018-03-01

    A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for a cooperative relaying system in this paper. Our goal is to find the optimal relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) demand in the data-link layer. Using the concept of effective capacity (EC), this goal can be formulated as a joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves larger EC than other schemes while satisfying the delay QoS demand. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between source and destination, and the EC becomes smaller as the QoS exponent becomes larger.
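
    The PA step via a Lagrange multiplier resembles classic water-filling. The sketch below maximizes a plain sum-rate rather than the paper's effective capacity, with made-up channel gains, to show how the multiplier is found by bisection:

```python
import numpy as np

def waterfill(gains, P_total, iters=100):
    """Maximize sum(log(1 + p_i * g_i)) s.t. sum(p) = P_total, p >= 0.

    KKT conditions give p_i = max(0, 1/mu - 1/g_i); bisect on the
    Lagrange multiplier mu until the power budget is met exactly.
    """
    g = np.asarray(gains, float)
    lo, hi = 1e-9, g.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, 1.0 / mu - 1.0 / g)
        if p.sum() > P_total:   # using too much power -> raise mu
            lo = mu
        else:
            hi = mu
    return p

p = waterfill([2.0, 1.0, 0.5], P_total=3.0)
print(p, p.sum())
```

    Stronger channels receive more power, and the common "water level" 1/mu plays the role of the shared multiplier in the paper's PA step.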

  10. Optimization of groundwater artificial recharge systems using a genetic algorithm: a case study in Beijing, China

    NASA Astrophysics Data System (ADS)

    Hao, Qichen; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Huang, Linxian

    2018-05-01

    An optimization approach is used for the operation of groundwater artificial recharge systems in an alluvial fan in Beijing, China. The optimization model incorporates a transient groundwater flow model, which allows for simulation of the groundwater response to artificial recharge. The facilities' operation with regard to recharge rates is formulated as a nonlinear programming problem to maximize the volume of surface water recharged into the aquifers under specific constraints. This optimization problem is solved by the parallel genetic algorithm (PGA) based on OpenMP, which could substantially reduce the computation time. To solve the PGA with constraints, the multiplicative penalty method is applied. In addition, the facilities' locations are implicitly determined on the basis of the results of the recharge-rate optimizations. Two scenarios are optimized and the optimal results indicate that the amount of water recharged into the aquifers will increase without exceeding the upper limits of the groundwater levels. Optimal operation of this artificial recharge system can also contribute to the more effective recovery of the groundwater storage capacity.
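
    The multiplicative penalty method can be sketched with a generic real-coded GA on a stand-in problem (maximize total recharge subject to a linear capacity limit; all numbers are illustrative and no parallelism is shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constrained problem (not the Beijing recharge model):
# maximize total recharge sum(x) over x in [0, 1]^4
# subject to a capacity constraint w @ x <= b.
w = np.array([1.0, 2.0, 3.0, 4.0])
b = 5.0

def fitness(x):
    """Multiplicative penalty: feasible solutions keep their raw objective."""
    violation = max(0.0, w @ x - b)
    return x.sum() / (1.0 + 10.0 * violation)

pop = rng.uniform(0.0, 1.0, size=(60, 4))
for _ in range(300):
    scores = np.array([fitness(x) for x in pop])
    elite = pop[np.argsort(scores)[-30:]]                       # keep the better half
    parents = elite[rng.integers(0, 30, size=(60, 2))]
    mix = rng.uniform(size=(60, 4))
    children = mix * parents[:, 0] + (1 - mix) * parents[:, 1]  # blend crossover
    children += rng.normal(0.0, 0.02, size=children.shape)      # mutation
    pop = np.clip(children, 0.0, 1.0)

best = max(pop, key=fitness)
print(best, best.sum(), w @ best)
```

    In the paper's setting the expensive part of each fitness evaluation is a transient groundwater flow simulation, which is why a parallel GA pays off.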

  11. Maneuver simulations of flexible spacecraft by solving TPBVP

    NASA Technical Reports Server (NTRS)

    Bainum, Peter M.; Li, Feiyue

    1991-01-01

    The optimal control of large angle rapid maneuvers and vibrations of a Shuttle mast reflector system is considered. The nonlinear equations of motion are formulated by using Lagrange's formula, with the mast modeled as a continuous beam. The nonlinear terms in the equations come from the coupling between the angular velocities, the modal coordinates, and the modal rates. Pontryagin's Maximum Principle is applied to the slewing problem, to derive the necessary conditions for the optimal controls, which are bounded by given saturation levels. The resulting two point boundary value problem (TPBVP) is then solved by using the quasilinearization algorithm and the method of particular solutions. In the numerical simulations, the structural parameters and the control limits from the Spacecraft Control Lab Experiment (SCOLE) are used. In the 2-D case, only the motion in the plane of an Earth orbit or the single axis slewing motion is discussed. In the 3-D slewing, the mast is modeled as a continuous beam subjected to 3-D deformations. The numerical results for both the linearized system and the nonlinear system are presented to compare the differences in their time response.

  12. Parallel Computations in Insect and Mammalian Visual Motion Processing

    PubMed Central

    Clark, Damon A.; Demb, Jonathan B.

    2016-01-01

    Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world. PMID:27780048

  13. Parallel Computations in Insect and Mammalian Visual Motion Processing.

    PubMed

    Clark, Damon A; Demb, Jonathan B

    2016-10-24

    Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on FPM (Fission Product Molybdenum) target irradiation. FPM targets take the form of a stainless steel tube containing nuclear-grade high-enrichment uranium, and the irradiated tube yields fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. Mo-99 has a comparatively long half-life of about 3 days (66 hours), so delivery of the radioisotope to consumer centers and storage is possible, though still limited, and its production potentially carries significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system, and several parallel algorithms have been developed for solving such systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel; previous work performed these calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing on a multicore system, allowing the reactivity calculations used in the safety analysis to be performed more quickly. The code was applied to the safety-limit calculation for irradiated FPM targets containing highly enriched uranium. The neutronic calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, the delta reactivities are still within safety limits, whereas for 7.9542 g and 8.838 g (× 10^6 cm^-1) the limits are exceeded.
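    The SOR iteration referenced above, sketched serially on a small dense array for clarity (the paper's parallel, sparse multigroup-diffusion setting is not reproduced):

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal).
    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 can accelerate
    convergence for symmetric positive definite systems."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Row sum uses already-updated entries j < i and old entries j > i.
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x
```

    Red-black or block orderings are the usual route to parallelizing the update sweep, since the plain lexicographic sweep above carries a sequential dependence between rows.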

  15. Second derivative time integration methods for discontinuous Galerkin solutions of unsteady compressible flows

    NASA Astrophysics Data System (ADS)

    Nigro, A.; De Bartolo, C.; Crivellini, A.; Bassi, F.

    2017-12-01

    In this paper we investigate the possibility of using the high-order accurate A(α)-stable Second Derivative (SD) schemes proposed by Enright for the implicit time integration of the Discontinuous Galerkin (DG) space-discretized Navier-Stokes equations. These multistep schemes are A-stable up to fourth order, but their use results in a system matrix that is difficult to compute, and the evaluation of the nonlinear function is computationally very demanding. We propose here a Matrix-Free (MF) implementation of the Enright schemes that avoids the costs of forming, storing and factorizing the system matrix, is much less computationally expensive than its matrix-explicit counterpart, and performs competitively with other implicit schemes, such as the Modified Extended Backward Differentiation Formulae (MEBDF). The algorithm uses the preconditioned GMRES algorithm for solving the linear system of equations. The preconditioner is based on the ILU(0) factorization of an approximated but computationally cheaper form of the system matrix, and it is reused over several time steps to improve the efficiency of the MF Newton-Krylov solver. We additionally employ a polynomial extrapolation technique to compute an accurate initial guess for the implicit nonlinear system. The stability properties of the SD schemes are analyzed by solving a linear model problem. For the analysis on the Navier-Stokes equations, two-dimensional inviscid and viscous test cases, both with known analytical solutions, are solved to assess the accuracy of the proposed time integration method for nonlinear autonomous and non-autonomous systems, respectively. The performance of the SD algorithm is compared with that of an MF-MEBDF solver in order to evaluate its effectiveness, identify its limitations and suggest possible further improvements.
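    The matrix-free idea hinges on the fact that Krylov methods such as GMRES touch the system matrix only through matrix-vector products, which can be approximated by finite differences of the residual. A minimal sketch with a toy residual F (an assumption for illustration, not the DG-discretized Navier-Stokes operator):

```python
import numpy as np

def jvp(F, u, v, eps=1e-7):
    """Finite-difference Jacobian-vector product J(u) @ v ~ (F(u+eps*v)-F(u))/eps.
    This is the core of a matrix-free Newton-Krylov iteration: GMRES only ever
    needs J through products like this, so the system matrix is never formed,
    stored, or factorized."""
    return (F(u + eps * v) - F(u)) / eps

# Toy nonlinear residual, standing in for the implicit stage equations.
def F(u):
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + u[1] ** 3 - 2.0])
```

    A frozen, approximate ILU(0) preconditioner is then applied on top of these products, which is why it can be reused across time steps without touching the (never-formed) exact matrix.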

  16. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, W.; Almgren, A.; Bell, J.

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Dezhi; Liu, Yixuan, E-mail: xuan61x@163.com; Guo, Zhanshe

    A new maglev sensor is proposed to measure ultra-low frequency (ULF) vibration, which uses hybrid-magnet levitation structure with electromagnets and permanent magnets as the supporting component, rather than the conventional spring structure of magnetoelectric vibration sensor. Since the lower measurement limit needs to be reduced, the equivalent bearing stiffness coefficient and the equivalent damping coefficient are adjusted by the sensitivity unit structure of the sensor and the closed-loop control system, which realizes both the closed-loop control and the solving algorithms. A simple sensor experimental platform is then assembled based on a digital hardware system, and experimental results demonstrate that the lower measurement limit of the sensor is increased to 0.2 Hz under these experimental conditions, indicating promising results of the maglev sensor for ULF vibration measurements.

  18. Theory and experiment research for ultra-low frequency maglev vibration sensor.

    PubMed

    Zheng, Dezhi; Liu, Yixuan; Guo, Zhanshe; Zhao, Xiaomeng; Fan, Shangchun

    2015-10-01

    A new maglev sensor is proposed to measure ultra-low frequency (ULF) vibration, which uses hybrid-magnet levitation structure with electromagnets and permanent magnets as the supporting component, rather than the conventional spring structure of magnetoelectric vibration sensor. Since the lower measurement limit needs to be reduced, the equivalent bearing stiffness coefficient and the equivalent damping coefficient are adjusted by the sensitivity unit structure of the sensor and the closed-loop control system, which realizes both the closed-loop control and the solving algorithms. A simple sensor experimental platform is then assembled based on a digital hardware system, and experimental results demonstrate that the lower measurement limit of the sensor is increased to 0.2 Hz under these experimental conditions, indicating promising results of the maglev sensor for ULF vibration measurements.

  19. Theory and experiment research for ultra-low frequency maglev vibration sensor

    NASA Astrophysics Data System (ADS)

    Zheng, Dezhi; Liu, Yixuan; Guo, Zhanshe; Zhao, Xiaomeng; Fan, Shangchun

    2015-10-01

    A new maglev sensor is proposed to measure ultra-low frequency (ULF) vibration, which uses hybrid-magnet levitation structure with electromagnets and permanent magnets as the supporting component, rather than the conventional spring structure of magnetoelectric vibration sensor. Since the lower measurement limit needs to be reduced, the equivalent bearing stiffness coefficient and the equivalent damping coefficient are adjusted by the sensitivity unit structure of the sensor and the closed-loop control system, which realizes both the closed-loop control and the solving algorithms. A simple sensor experimental platform is then assembled based on a digital hardware system, and experimental results demonstrate that the lower measurement limit of the sensor is increased to 0.2 Hz under these experimental conditions, indicating promising results of the maglev sensor for ULF vibration measurements.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach, based on corotation, for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software, without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.

  1. Design and implementation of intelligent electronic warfare decision making algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Hsin-Hsien; Chen, Chang-Kuo; Hsueh, Chi-Shun

    2017-05-01

    The density of electromagnetic signals and the requirement for timely responses have grown rapidly in modern electronic warfare. Although jammers are a limited resource, the best electronic warfare efficiency can be achieved through tactical decision making. This paper proposes an intelligent electronic warfare decision support system. In this work, we develop a novel hybrid algorithm, Digital Pheromone Particle Swarm Optimization, based on Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Shuffled Frog Leaping Algorithm (SFLA). We use PSO to solve the problem and adopt the pheromone concept from ACO to accumulate useful information in the search space and speed up finding the optimal solution. The proposed algorithm finds the optimal solution in reasonable computation time by using the matrix conversion method of SFLA. The results indicate that the resulting jammer allocation is more effective. The system based on the hybrid algorithm provides electronic warfare commanders with critical information to assist them in effectively managing the complex electromagnetic battlefield.
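    The abstract's hybrid additionally layers ACO-style pheromones and an SFLA-style shuffle on top of PSO; only the plain PSO core is sketched here, on a generic objective rather than the jammer-allocation problem:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=3, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f with plain particle swarm optimization: each particle is
    pulled toward its personal best (c1 term) and the swarm's global best
    (c2 term), with inertia w damping the velocity."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest
```

    A pheromone extension would bias the update toward regions where good solutions have accumulated, which is the "accumulate useful information in the search space" idea in the abstract.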

  2. A randomized controlled trial of a Dutch version of systems training for emotional predictability and problem solving for borderline personality disorder.

    PubMed

    Bos, Elisabeth H; van Wel, E Bas; Appelo, Martin T; Verbraak, Marc J P M

    2010-04-01

    Systems Training for Emotional Predictability and Problem Solving (STEPPS) is a group treatment for persons with borderline personality disorder (BPD) that is relatively easy to implement. We investigated the efficacy of a Dutch version of this treatment (VERS). Seventy-nine DSM-IV BPD patients were randomly assigned to STEPPS plus an adjunctive individual therapy, or to treatment as usual. Assessments took place before and after the intervention, and at a 6-month follow-up. STEPPS recipients showed a significantly greater reduction in general psychiatric and BPD-specific symptomatology than subjects assigned to treatment as usual; these differences remained significant at follow-up. STEPPS also led to greater improvement in quality of life, especially at follow-up. No differences in impulsive or parasuicidal behavior were observed. Effect sizes for the differences between the treatments were moderate to large. The results suggest that the brief STEPPS program combined with limited individual therapy can improve BPD-treatment in a number of ways.

  3. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, among other scenarios. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear; because of the i.i.d. assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method, and the infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  4. Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

    DOE PAGES

    Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart

    2015-02-14

    Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
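    The forecasting idea can be sketched with plain polynomial extrapolation in place of the paper's Gappy POD (a deliberate simplification): fit the recent history of each unknown and evaluate one step ahead, then hand the result to the Newton solver as its initial guess.

```python
import numpy as np

def forecast(history, degree=2):
    """Extrapolate the state at the next time step from the last few steps
    by fitting a low-degree polynomial per component. Uniform time steps
    are assumed; `history` has one row per past step."""
    history = np.asarray(history, dtype=float)
    k = history.shape[0]
    t = np.arange(k)
    return np.array([np.polyval(np.polyfit(t, history[:, j], degree), k)
                     for j in range(history.shape[1])])
```

    When the state evolves smoothly in time, this warm start lands much closer to the root than the previous step's solution, so the Newton iteration needs fewer corrections and therefore fewer linear solves.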

  5. A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction

    DTIC Science & Technology

    2010-08-09

    more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of...conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit, if one...converges, one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several

  6. First flights of genetic-algorithm Kitty Hawk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, D.E.

    1994-12-31

    The design of complex systems requires an effective methodology of invention. This paper considers the methodology of the Wright brothers in inventing the powered airplane and suggests how successes in the design of genetic algorithms have come at the hands of a Wright-brothers-like approach. Recent reliable subquadratic results in solving hard problems with nontraditional GAs and predictions of the limits of simple GAs are presented as two accomplishments achieved in this manner.

  7. Mathematical modeling of processes of heat and mass transfer in channels of water evaporating coolers

    NASA Astrophysics Data System (ADS)

    Gulevsky, V. A.; Ryazantsev, A. A.; Nikulichev, A. A.; Menzhulova, A. S.

    2018-05-01

    The variety of cooling systems is dictated by the wide range of demands placed on them: price, operating costs, quality of work, ecological safety, and so on. Water-evaporating plate coolers meet these requirements well, but their widespread use is currently limited by a lack of theoretical groundwork. Mathematical modeling is the best method for addressing this problem.

  8. Ball bearing heat analysis program (BABHAP)

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The Ball Bearing Heat Analysis Program (BABHAP) is an attempt to assemble a series of equations, some of which are nonlinear algebraic systems, in a logical order which, when solved, provides a comprehensive analysis of the load distribution among the balls, ball velocities, heat generation resulting from friction, applied load, and ball spinning, minimum lubricant film thickness, and many additional characteristics of ball bearing systems. Although initial design requirements for BABHAP were dictated by the core limitations of the PDP 11/45 computer (approximately 8K of real words with a limited number of instructions), the program dimensions can easily be expanded for large-core computers such as the UNIVAC 1108. The PDP version of BABHAP is also operational on the UNIVAC system, with the exception that the PDP uses the 029 punch and the UNIVAC the 026; a conversion program was written to allow transfer between machines.

  9. Present status of metrology of electro-optical surveillance systems

    NASA Astrophysics Data System (ADS)

    Chrzanowski, K.

    2017-10-01

    There has been significant progress in equipment for testing electro-optical surveillance systems over the last decade. Modern test systems are increasingly computerized, employ advanced image processing and offer software support in the measurement process. However, one great challenge, relatively low accuracy, remains unsolved. It is quite common that different test stations, when testing the same device, produce different results; it can even happen that two testing teams, working on the same test station with the same tested device, produce different results. The rapid growth of electro-optical technology, poor standardization, limited metrology infrastructure, the subjective nature of some measurements, fundamental limitations from the laws of physics, tendering rules and advances in artificial intelligence are the major factors responsible for this situation. Regardless, the next decade should bring significant improvements, since improved measurement accuracy is needed to sustain the fast growth of electro-optical surveillance technology.

  10. A celestial assisted INS initialization method for lunar explorers.

    PubMed

    Ning, Xiaolin; Wang, Longhua; Wu, Weiren; Fang, Jiancheng

    2011-01-01

    The second and third phases of the Chinese Lunar Exploration Program (CLEP) are planning to achieve Moon landing, surface exploration and automated sample return. In these missions, the inertial navigation system (INS) and celestial navigation system (CNS) are two indispensable autonomous navigation systems which can compensate for limitations in the ground based navigation system. The accurate initialization of the INS and the precise calibration of the CNS are needed in order to achieve high navigation accuracy. Neither the INS nor the CNS can solve the above problems using the ground controllers or by themselves on the lunar surface. However, since they are complementary to each other, these problems can be solved by combining them together. A new celestial assisted INS initialization method is presented, in which the initial position and attitude of the explorer as well as the inertial sensors' biases are estimated by aiding the INS with celestial measurements. Furthermore, the systematic error of the CNS is also corrected by the help of INS measurements. Simulations show that the maximum error in position is 300 m and in attitude 40″, which demonstrates this method is a promising and attractive scheme for explorers on the lunar surface.

  11. A Celestial Assisted INS Initialization Method for Lunar Explorers

    PubMed Central

    Ning, Xiaolin; Wang, Longhua; Wu, Weiren; Fang, Jiancheng

    2011-01-01

    The second and third phases of the Chinese Lunar Exploration Program (CLEP) are planning to achieve Moon landing, surface exploration and automated sample return. In these missions, the inertial navigation system (INS) and celestial navigation system (CNS) are two indispensable autonomous navigation systems which can compensate for limitations in the ground based navigation system. The accurate initialization of the INS and the precise calibration of the CNS are needed in order to achieve high navigation accuracy. Neither the INS nor the CNS can solve the above problems using the ground controllers or by themselves on the lunar surface. However, since they are complementary to each other, these problems can be solved by combining them together. A new celestial assisted INS initialization method is presented, in which the initial position and attitude of the explorer as well as the inertial sensors’ biases are estimated by aiding the INS with celestial measurements. Furthermore, the systematic error of the CNS is also corrected by the help of INS measurements. Simulations show that the maximum error in position is 300 m and in attitude 40″, which demonstrates this method is a promising and attractive scheme for explorers on the lunar surface. PMID:22163998

  12. A Lane-Level LBS System for Vehicle Network with High-Precision BDS/GPS Positioning

    PubMed Central

    Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo

    2015-01-01

    In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem. PMID:25755665

  13. The mathematical model of dynamic stabilization system for autonomous car

    NASA Astrophysics Data System (ADS)

    Saikin, A. M.; Buznikov, S. E.; Shabanov, N. S.; Elkin, D. S.

    2018-02-01

    Leading foreign companies and domestic enterprises carry out extensive research and development in the field of control systems for autonomous cars and in the field of improving driver assistance systems. The search for technical solutions is, as a rule, based on heuristic methods and does not always lead to satisfactory results. The purpose of this research is to formalize the road safety problem in terms of modern control theory and to construct an adequate mathematical model for solving it, including the choice of software and hardware environment. For automatic control of the object, it is necessary to solve the dynamic stabilization problem in its most complete formulation. The solution quality of the problem on a finite time interval is estimated by the value of a quadratic functional. Car speed, turn angle and additional yaw rate (during car drift or skidding) measurements are performed programmatically by original virtual sensors. The limit speeds at which drift, skidding or rollover begins are calculated programmatically, taking into account the friction coefficient identified in motion. The analysis of the results confirms the adequacy of the mathematical models and algorithms, as well as the possibility of implementing the system in a minimal technical configuration.

  14. Exploring the limits of learning: Segregation of information integration and response selection is required for learning a serial reversal task

    PubMed Central

    Zanutto, B. Silvano

    2017-01-01

    Animals are proposed to learn the latent rules governing their environment in order to maximize their chances of survival. However, rules may change without notice, forcing animals to keep a memory of which one is currently at work. Rule switching can lead to situations in which the same stimulus/response pairing is positively and negatively rewarded in the long run, depending on variables that are not accessible to the animal. This fact raises questions about how neural systems are capable of reinforcement learning in environments where the reinforcement is inconsistent. Here we address this issue by asking which aspects of connectivity, neural excitability and synaptic plasticity are key for a very general, stochastic spiking neural network model to solve a task in which rules change without being cued, taking the serial reversal task (SRT) as paradigm. Contrary to what could be expected, we found strong limitations for biologically plausible networks to solve the SRT. In particular, we proved that no network of neurons can learn a SRT if a single neural population both integrates stimulus information and is responsible for choosing the behavioural response. This limitation is independent of the number of neurons, neuronal dynamics or plasticity rules, and arises from the fact that plasticity is locally computed at each synapse, and that synaptic changes and neuronal activity are mutually dependent processes. We propose and characterize a spiking neural network model that solves the SRT, which relies on separating the functions of stimuli integration and response selection. The model suggests that experimental efforts to understand neural function should focus on the characterization of neural circuits according to their connectivity, neural dynamics, and the degree of modulation of synaptic plasticity with reward. PMID:29077735

  15. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT applications. PMID:23464295
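    A convex feasibility problem asks only for a point in the intersection of convex sets, and the simplest solver is cyclic projection (POCS). A minimal sketch with two toy sets, a hyperplane and a ball, standing in for data-fidelity and regularity constraints (the paper itself uses accelerated Chambolle-Pock, not POCS):

```python
import numpy as np

def pocs(project_ops, x0, n_iter=500):
    """Projection onto convex sets: cyclically apply each projection operator.
    If the intersection is nonempty, the iterates converge to a point in it."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        for P in project_ops:
            x = P(x)
    return x

# Two illustrative sets: the hyperplane a.x = 1 and the ball ||x|| <= 0.8.
a = np.array([1.0, 1.0])
proj_plane = lambda x: x + (1.0 - a @ x) / (a @ a) * a
proj_ball = lambda x: x if np.linalg.norm(x) <= 0.8 else 0.8 * x / np.linalg.norm(x)
```

    Each projection is cheap and closed-form here; in CT the sets encode constraints on the reconstructed image, and faster first-order schemes replace the plain cyclic sweep.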

  16. New Ideas on the Design of the Web-Based Learning System Oriented to Problem Solving from the Perspective of Question Chain and Learning Community

    ERIC Educational Resources Information Center

    Zhang, Yin; Chu, Samuel K. W.

    2016-01-01

    In recent years, a number of models concerning problem solving systems have been put forward. However, many of them stress technology and neglect research on problem solving itself, especially the learning mechanism related to problem solving. In this paper, we analyze the learning mechanism of problem solving, and propose that when…

  17. Nonlinear low-frequency electrostatic wave dynamics in a two-dimensional quantum plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Samiran, E-mail: sran_g@yahoo.com; Chakrabarti, Nikhil, E-mail: nikhil.chakrabarti@saha.ac.in

    2016-08-15

    The problem of two-dimensional arbitrary amplitude low-frequency electrostatic oscillation in a quasi-neutral quantum plasma is solved exactly by elementary means. In such quantum plasmas we have treated electrons quantum mechanically and ions classically. The exact analytical solution of the nonlinear system exhibits the formation of dark and black solitons. Numerical simulation also predicts the possible periodic solution of the nonlinear system. Nonlinear analysis reveals that the system does have a bifurcation at a critical Mach number that depends on the angle of propagation of the wave. The small-amplitude limit leads to the formation of weakly nonlinear Kadomtsev–Petviashvili solitons.

  18. A Portable Computer System for Auditing Quality of Ambulatory Care

    PubMed Central

    McCoy, J. Michael; Dunn, Earl V.; Borgiel, Alexander E.

    1987-01-01

    Prior efforts to effectively and efficiently audit quality of ambulatory care based on comprehensive process criteria have been limited largely by the complexity and cost of data abstraction and management. Over the years, several demonstration projects have generated large sets of process criteria and mapping systems for evaluating quality of care, but these paper-based approaches have been impractical to implement on a routine basis. Recognizing that portable microcomputers could solve many of the technical problems in abstracting data from medical records, we built upon previously described criteria and developed a microcomputer-based abstracting system that facilitates reliable and cost-effective data abstraction.

  19. [Effect of implementation of essential medicine system in the primary health care institution in China].

    PubMed

    Huang, Donghong; Ren, Xiaohua; Hu, Jingxuan; Shi, Jingcheng; Xia, Da; Sun, Zhenqiu

    2015-02-01

    Our primary health care institutions began to implement the national essential medicine system in 2009. In the past five years, the goal of the national essential medicine system has been initially achieved. For example, medicine prices are steadily decreasing, the quality of medical services is improving, and residents' satisfaction is increasing substantially every year. However, at the same time, we also found some urgent problems that need to be solved. For example, the range of national essential medicines is limited, which makes it difficult to guarantee the quality of essential medication. In addition, how to compensate the primary health care institutions is still an open question.

  20. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging is aimed at relatively static targets or a fixed imaging system, as it is limited by the number of measurements received through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system to solve the imaging problem in the presence of imaging system motion. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios. These relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of IR images and enhance the contrast between the target and the background in the presence of system movement.

  1. Resource allocation for multichannel broadcasting visible light communication

    NASA Astrophysics Data System (ADS)

    Le, Nam-Tuan; Jang, Yeong Min

    2015-11-01

    Visible light communication (VLC), which offers the possibility of using light sources for both illumination and data communication simultaneously, is a promising technique to incorporate into lighting applications. However, some challenges remain, especially coverage, because of the field-of-view limitation. In this paper, we focus on this issue by suggesting a resource allocation scheme for a VLC broadcasting system. By using frame synchronization and a network-calculus QoS approximation, as well as diversity technology, the proposed VLC architecture and QoS resource allocation for the multichannel-broadcasting MAC (medium access control) protocol can solve the coverage limitation problem and the link switching problem of exhibition services.

  2. A Decision Support System for Solving Multiple Criteria Optimization Problems

    ERIC Educational Resources Information Center

    Filatovas, Ernestas; Kurasova, Olga

    2011-01-01

    In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
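The weighted-sum scalarization mentioned above can be illustrated on a toy instance. The candidates and the two criteria values (cost, time) below are hypothetical; the DSS itself is interactive, whereas this sketch only shows how changing the weight vector changes which solution minimizes the scalarized objective.

```python
# Hypothetical candidate solutions scored on two criteria: (cost, time).
candidates = {"A": (1.0, 9.0), "B": (4.0, 4.0), "C": (9.0, 1.0)}

def weighted_sum_choice(w_cost, w_time):
    # Pick the candidate minimizing the weighted sum of its criteria.
    return min(candidates,
               key=lambda k: w_cost * candidates[k][0] + w_time * candidates[k][1])

print(weighted_sum_choice(0.9, 0.1))  # cost-heavy weights -> "A"
print(weighted_sum_choice(0.1, 0.9))  # time-heavy weights -> "C"
```

Sweeping the weights traces out different Pareto-optimal choices, which is the mechanism an interactive DSS exposes to the decision maker.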

  3. Technique for Solving Electrically Small to Large Structures for Broadband Applications

    NASA Technical Reports Server (NTRS)

    Jandhyala, Vikram; Chowdhury, Indranil

    2011-01-01

    Fast iterative algorithms are often used for solving Method of Moments (MoM) systems, having a large number of unknowns, to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N²) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance substantially degrades for non-uniformly distributed structures due to the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. The system is enclosed with a cube at a 0-th level, and the cube at the 0-th level is split into eight child cubes. This forms cubes at a 1st level; the splitting process is recursively repeated for cubes at successive levels until a desired number of levels is created. For each cube that is thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix vector product for any electrically large elements as well as a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large and small elements are combined, and a net delta for a combination of the matrix vector products is determined.
The iteration continues until a net delta is obtained that is within the predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter. The solution for the desired parameter is then presented to a user in a tangible form; for example, on a display.
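The recursive cube splitting described above can be sketched independently of the electromagnetic details. The fragment below only builds the level structure of the oct-tree; a full FMM/QR solver would also attach neighbor and interaction lists, which are omitted here.

```python
def build_octree(center, half, level, max_level, cubes):
    # Record this cube, then split it into eight children by offsetting
    # the center by +/- half the new half-width along each axis.
    cubes.append((level, center, half))
    if level == max_level:
        return
    h = half / 2.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = (center[0] + dx, center[1] + dy, center[2] + dz)
                build_octree(child, h, level + 1, max_level, cubes)

cubes = []
build_octree((0.0, 0.0, 0.0), 1.0, 0, 2, cubes)
print(len(cubes))  # 1 + 8 + 64 = 73 cubes across levels 0..2
```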

  4. Phase portraits analysis of a barotropic system: The initial value problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuetche, Victor Kamgang, E-mail: vkuetche@yahoo.fr; Department of Physics, Faculty of Science, University of Yaounde I, P.O. Box 812, Yaounde; The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste

    2014-05-15

    In this paper, we investigate the phase portrait features of a barotropic relaxing medium under pressure perturbations. As a starting point, we show within third-order accuracy that the previous system is modeled by a “dissipative” cubic nonlinear evolution equation. Paying particular attention to high-frequency perturbations of the system, we solve the initial value problem of the system both analytically and numerically while unveiling the existence of localized multivalued waveguide channels. Accordingly, we find that the “dissipative” term with a “dissipative” parameter less than some limit value does not destroy the ambiguous solutions. We address some physical implications of the results obtained previously.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelic, Andjelka; Zagonel, Aldo A.

    A system dynamics model was developed in response to the apparent decline in STEM candidates in the United States and a pending shortage. The model explores the attractiveness of STEM and STEM careers focusing on employers and the workforce. Policies such as boosting STEM literacy, lifting the H-1B visa cap, limiting the offshoring of jobs, and maintaining training are explored as possible solutions. The system is complex, with many feedbacks and long time delays, so solutions that focus on a single point of the system are not effective and cannot solve the problem. A deeper understanding of parts of the system that have not been explored to date is necessary to find a workable solution.

  6. Symbiotic intelligence: Self-organizing knowledge on distributed networks, driven by human interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, N.; Joslyn, C.; Rocha, L.

    1998-07-01

    This work addresses how human societies, and other diverse and distributed systems, solve collective challenges that are not approachable from the level of the individual, and how the Internet will change the way societies and organizations view problem solving. The authors apply the ideas developed in self-organizing systems to understand self-organization in informational systems. The simplest explanation as to why animals (for example, ants, wolves, and humans) are organized into societies is that these societies enhance the survival of the individuals which make up the populations. Individuals contribute to, as well as adapt to, these societies because they make life easier in one way or another, even though they may not always understand the process, either individually or collectively. Despite the lack of understanding of the how of the process, society during its existence as a species has changed significantly, from separate, small hunting tribes to a highly technological, globally integrated society. The authors combine this understanding of societal dynamics with self-organization on the Internet (the Net). The unique capability of the Net is that it combines, in a common medium, the entire human-technological system in both breadth and depth: breadth in the integration of heterogeneous systems of machines, information and people; and depth in the detailed capturing of the entire complexity of human use and creation of information. When the full diversity of societal dynamics is combined with the accuracy of communication on the Net, a phase transition is argued to occur in problem solving capability. Through conceptual examples, an experiment of collective decision making on the Net and a simulation showing the effect of noise and loss on collective decision making, the authors argue that the resulting symbiotic structure of humans and the Net will evolve as an alternative problem solving approach for groups, organizations and society.
Self-organizing knowledge formation from this symbiotic intelligence exemplifies a new type of self-organizing system, one without dissipation and not constrained by limited resources.

  7. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    PubMed Central

    Saberi Nik, Hassan; Rebelo, Paulo

    2014-01-01

    We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results. PMID:25386624
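The multistage idea, relaxation sweeps on each subinterval chained over a sequence of intervals, can be sketched on a scalar test equation. This is not the authors' Chebyshev pseudospectral implementation: the sketch below uses plain Picard-type relaxation with trapezoid quadrature on u' = -u, chosen so the result can be checked against exp(-t).

```python
import numpy as np

def relax_interval(x0, t0, t1, n=64, sweeps=50):
    # Picard-type relaxation on one subinterval: repeatedly update
    # x(t) <- x0 + integral of f(x) with f(x) = -x, using cumulative
    # trapezoid quadrature on n+1 nodes, until the sweep limit.
    t = np.linspace(t0, t1, n + 1)
    x = np.full(n + 1, x0)
    for _ in range(sweeps):
        f = -x
        integ = np.concatenate(([0.0],
                                np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(t))))
        x = x0 + integ
    return x[-1]

def multistage(x0, T, stages):
    # Chain the relaxation solves over consecutive subintervals,
    # passing the end value of each stage as the next initial value.
    edges = np.linspace(0.0, T, stages + 1)
    x = x0
    for t0, t1 in zip(edges[:-1], edges[1:]):
        x = relax_interval(x, t0, t1)
    return x

print(multistage(1.0, 1.0, stages=4))  # close to exp(-1) ~ 0.3679
```

Short subintervals keep the relaxation contractive, which is the same reason the MSRM works over a sequence of multiple intervals rather than one long one.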

  8. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  9. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
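The implicit-explicit splitting idea can be shown at its simplest with first-order IMEX Euler on a stiff scalar test problem (the Prothero-Robinson equation, a standard stand-in; the ARK schemes studied above are higher-order analogues). The stiff relaxation term is advanced with backward Euler and the remaining term explicitly, allowing a step size far beyond the explicit stability limit.

```python
import math

def imex_euler(lmbda=1000.0, dt=0.01, T=1.0):
    # Prothero-Robinson test problem: u' = -lmbda*(u - cos t) - sin t,
    # exact solution u(t) = cos t for u(0) = 1.  The stiff relaxation
    # term is treated implicitly (backward Euler), the forcing term
    # explicitly; solving the linear implicit update gives:
    #   u_new = (u - dt*sin(t) + lmbda*dt*cos(t_new)) / (1 + lmbda*dt)
    u, t = 1.0, 0.0
    for _ in range(round(T / dt)):
        t_new = t + dt
        u = (u - dt * math.sin(t) + lmbda * dt * math.cos(t_new)) \
            / (1.0 + lmbda * dt)
        t = t_new
    return u

print(abs(imex_euler() - math.cos(1.0)))  # small error despite dt >> 2/lmbda
```

A fully explicit Euler step would require dt < 2/lmbda = 0.002 for stability; the IMEX step above runs stably at five times that size.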

  10. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  11. Poisson-Boltzmann-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-01

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations.
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages. Extensive numerical experiments show that there is an excellent consistency between the results predicted from the present PBNP model and those obtained from the PNP model in terms of the electrostatic potentials, ion concentration profiles, and current-voltage (I-V) curves. The present PBNP model is further validated by a comparison with experimental measurements of I-V curves under various ion bulk concentrations. Numerical experiments indicate that the proposed PBNP model is more efficient than the original PNP model in terms of simulation time.
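The key substitution in the PBNP model, replacing a Nernst-Planck equation with an equilibrium Boltzmann distribution, reduces to a one-line formula per ion species. The numbers below (bulk concentration, local potential, room-temperature thermal voltage) are illustrative only, not taken from the paper's simulations.

```python
import math

def boltzmann_concentration(c_bulk, z, phi, kT_over_e=0.0259):
    # Equilibrium Boltzmann closure used to replace a Nernst-Planck
    # equation: c(x) = c_bulk * exp(-z * phi(x) / (kT/e)), with phi in
    # volts and kT/e ~ 25.9 mV at room temperature.
    return c_bulk * math.exp(-z * phi / kT_over_e)

# A monovalent cation (z = +1) is enriched in a region of negative
# potential equal to one thermal voltage:
c = boltzmann_concentration(0.1, +1, -0.0259)
print(c)  # 0.1 * e ~ 0.2718 mol/L
```

Because the concentration follows directly from the potential, only the Poisson-Boltzmann and remaining Nernst-Planck equations need to be solved, which is the source of the efficiency gain reported above.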

  12. Poisson-Boltzmann-Nernst-Planck model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Qiong; Wei Guowei; Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations.
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages. Extensive numerical experiments show that there is an excellent consistency between the results predicted from the present PBNP model and those obtained from the PNP model in terms of the electrostatic potentials, ion concentration profiles, and current-voltage (I-V) curves. The present PBNP model is further validated by a comparison with experimental measurements of I-V curves under various ion bulk concentrations. Numerical experiments indicate that the proposed PBNP model is more efficient than the original PNP model in terms of simulation time.

  13. Poisson–Boltzmann–Nernst–Planck model

    PubMed Central

    Zheng, Qiong; Wei, Guo-Wei

    2011-01-01

    The Poisson–Nernst–Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst–Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst–Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst–Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson–Boltzmann and Nernst–Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations.
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages. Extensive numerical experiments show that there is an excellent consistency between the results predicted from the present PBNP model and those obtained from the PNP model in terms of the electrostatic potentials, ion concentration profiles, and current–voltage (I–V) curves. The present PBNP model is further validated by a comparison with experimental measurements of I–V curves under various ion bulk concentrations. Numerical experiments indicate that the proposed PBNP model is more efficient than the original PNP model in terms of simulation time. PMID:21599038

  14. Poisson-Boltzmann-Nernst-Planck model.

    PubMed

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations.
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages. Extensive numerical experiments show that there is an excellent consistency between the results predicted from the present PBNP model and those obtained from the PNP model in terms of the electrostatic potentials, ion concentration profiles, and current-voltage (I-V) curves. The present PBNP model is further validated by a comparison with experimental measurements of I-V curves under various ion bulk concentrations. Numerical experiments indicate that the proposed PBNP model is more efficient than the original PNP model in terms of simulation time. © 2011 American Institute of Physics.

  15. Microgrids for Service Restoration to Critical Load in a Resilient Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yin; Liu, Chen-Ching; Schneider, Kevin P.

    Microgrids can act as emergency sources to serve critical loads when utility power is unavailable. This paper proposes a resiliency-based methodology that uses microgrids to restore critical loads on distribution feeders after a major disaster. Due to the limited capacity of distributed generators (DGs) within microgrids, dynamic performance of the DGs during the restoration process becomes essential. In this paper, the stability of microgrids, limits on frequency deviation, and limits on transient voltage and current of DGs are incorporated as constraints of the critical load restoration problem. The limits on the amount of generation resources within microgrids are also considered. By introducing the concepts of restoration tree and load group, restoration of critical loads is transformed into a maximum coverage problem, which is a linear integer program (LIP). The restoration paths and actions are determined for critical loads by solving the LIP. A 4-feeder, 1069-bus unbalanced test system with four microgrids is utilized to demonstrate the effectiveness of the proposed method. The method is applied to the distribution system in Pullman, WA, resulting in a strategy that uses generators on the Washington State University campus to restore service to the Hospital and City Hall in Pullman.
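The maximum-coverage formulation can be illustrated on a toy instance. The restoration trees, load names, and capacity figure below are hypothetical, and the tiny integer program is brute-forced here rather than handed to an LIP solver.

```python
from itertools import combinations

# Hypothetical load groups: each candidate restoration tree covers a set
# of critical loads and draws a given amount of DG capacity (MW).
trees = {
    "T1": ({"hospital", "city_hall"}, 5.0),
    "T2": ({"water_plant"}, 2.0),
    "T3": ({"hospital", "water_plant", "shelter"}, 8.0),
}
capacity = 9.0  # total microgrid generation available

def best_restoration(trees, capacity):
    # Brute-force the maximum-coverage integer program: pick the subset
    # of trees within capacity that covers the most critical loads.
    best, best_cover = (), set()
    for r in range(len(trees) + 1):
        for subset in combinations(trees, r):
            if sum(trees[t][1] for t in subset) <= capacity:
                cover = (set().union(*(trees[t][0] for t in subset))
                         if subset else set())
                if len(cover) > len(best_cover):
                    best, best_cover = subset, cover
    return best, best_cover

choice, covered = best_restoration(trees, capacity)
print(choice, sorted(covered))
```

Real instances have far too many tree/group combinations for enumeration, which is why the paper formulates and solves the problem as an LIP.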

  16. A Comparison of Analytical and Numerical Methods for Modeling Dissolution and Other Reactions in Transport Limited Systems

    NASA Astrophysics Data System (ADS)

    Hochstetler, D. L.; Kitanidis, P. K.

    2009-12-01

    Modeling the transport of reactive species is a computationally demanding problem, especially in complex subsurface media, where it is crucial to improve understanding of geochemical processes and the fate of groundwater contaminants. In most of these systems, reactions are inherently fast, and the actual rates of transformation are limited by the slower physical transport mechanisms. There have been efforts to reformulate multi-component reactive transport problems into systems that are simpler and less demanding to solve. These reformulations include defining conservative species and decoupling the reactive transport equations so that fewer of them must be solved, leaving mostly conservative equations for transport [e.g., De Simoni et al., 2005; De Simoni et al., 2007; Kräutle and Knabner, 2007; Molins et al., 2004]. The complexity and computational cost of the numerical codes used to solve such problems also led De Simoni et al. [2005] to develop more manageable analytical solutions. Furthermore, that work evaluates reaction rates and reaffirmed that the mixing rate ∇uᵀD∇u, where u is a solute concentration and D is the dispersion tensor, as defined by Kitanidis [1994], is an important and sometimes dominant factor in determining reaction rates. Thus, mixing of solutions is often reaction-limiting. We will present results from analytical and computational modeling of multi-component reactive-transport problems. The results have applications to dissolution of solid boundaries (e.g., calcite), dissolution of non-aqueous phase liquids (NAPLs) in separate phases, and mixing of saltwater and freshwater (e.g., saltwater intrusion in coastal carbonate aquifers). We quantify reaction rates, compare numerical and analytical results, and analyze under which circumstances each approach is most effective for a given problem. References: De Simoni, M., et al. (2005), A procedure for the solution of multicomponent reactive transport problems, Water Resources Research, 41(W11410). De Simoni, M., et al. (2007), A mixing ratios-based formulation for multicomponent reactive transport, Water Resources Research, 43(W07419). Kitanidis, P. (1994), The Concept of the Dilution Index, Water Resources Research, 30(7), 2011-2026. Kräutle, S., and P. Knabner (2007), A reduction scheme for coupled multicomponent transport-reaction problems in porous media: Generalization to problems with heterogeneous equilibrium reactions, Water Resources Research, 43. Molins, S., et al. (2004), A formulation for decoupling components in reactive transport problems, Water Resources Research, 40, 13.
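    The mixing rate discussed in this record (the concentration gradient weighted by the dispersion tensor) is a pointwise scalar. A minimal numeric sketch, with a purely illustrative gradient and dispersion tensor (both assumed values, not from the paper):

```python
import numpy as np

# Evaluate the mixing rate grad(u)^T D grad(u) (Kitanidis, 1994) at one point.
# The local concentration gradient and dispersion tensor are assumed values.
grad_u = np.array([0.8, -0.3])       # assumed local gradient of solute concentration u
D = np.array([[1e-9, 0.0],           # assumed dispersion tensor [m^2/s]
              [0.0, 1e-10]])

mixing_rate = grad_u @ D @ grad_u    # non-negative for positive semi-definite D
print(mixing_rate)                   # 0.8^2 * 1e-9 + 0.3^2 * 1e-10 = 6.49e-10
```

    A larger mixing rate at a point means faster mixing-driven reaction there, consistent with the record's conclusion that mixing is often the reaction-limiting process.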

  17. A New Runge-Kutta Discontinuous Galerkin Method with Conservation Constraint to Improve CFL Condition for Solving Conservation Laws

    PubMed Central

    Xu, Zhiliang; Chen, Xu-Yan; Liu, Yingjie

    2014-01-01

    We present a new formulation of the Runge-Kutta discontinuous Galerkin (RKDG) method [9, 8, 7, 6] for solving conservation laws with increased CFL numbers. The new formulation requires the computed RKDG solution in a cell to satisfy an additional conservation constraint in adjacent cells and does not increase the complexity or change the compactness of the RKDG method. Numerical computations for solving one-dimensional and two-dimensional scalar and systems of nonlinear hyperbolic conservation laws are performed with approximate solutions represented by piecewise quadratic and cubic polynomials, respectively. The hierarchical reconstruction [17, 33] is applied as a limiter to eliminate spurious oscillations in discontinuous solutions. From both numerical experiments and the analytic estimate of the CFL number of the newly formulated method, we find that: 1) the new formulation improves the CFL number over the original RKDG formulation by at least a factor of three and thus reduces the overall computational cost; and 2) the new formulation essentially does not compromise the resolution of numerical solutions of shock wave problems compared with those computed by the original RKDG method. PMID:25414520

  18. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
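    The report's algorithm solves semi-definite programs; as a rough sketch of the underlying allocation idea only, the commanded-versus-achieved error can be minimized as a bounded least-squares problem. The mixing matrix `B`, command `w_cmd`, and limits below are invented illustrative values, not from the report:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Allocate actuator commands f to minimize ||B f - w_cmd|| subject to
# per-actuator limits. B maps individual actuator commands to the total
# force/torque; all numbers here are assumed for illustration.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.5, -0.5, 1.0]])        # rows: total force/torque components
w_cmd = np.array([2.0, 1.0, 0.5])       # commanded total force/torque
lo, hi = -1.5, 1.5                      # assumed lower/upper actuator limits

res = lsq_linear(B, w_cmd, bounds=(lo, hi))
f = res.x                               # allocation respecting the limits
print(f, B @ f)
```

    `lsq_linear` enforces the per-actuator limits directly; the report's two-stage SDP formulation additionally addresses fuel consumption, which this sketch omits.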

  19. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  20. Modelling the performance of the tapered artery heat pipe design for use in the radiator of the solar dynamic power system of the NASA Space Station

    NASA Technical Reports Server (NTRS)

    Evans, Austin Lewis

    1988-01-01

    The paper presents a computer program developed to model the steady-state performance of the tapered artery heat pipe for use in the radiator of the solar dynamic power system of the NASA Space Station. The program solves six governing equations to ascertain which one is limiting the maximum heat transfer rate of the heat pipe. The present model appeared to be slightly better than the LTV model in matching the 1-g data for the standard 15-ft test heat pipe.

  1. Expansion and improvements of the FORMA system for response and load analysis. Volume 1: Programming manual

    NASA Technical Reports Server (NTRS)

    Wohlen, R. L.

    1976-01-01

    Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.

  2. A free energy satisfying discontinuous Galerkin method for one-dimensional Poisson-Nernst-Planck systems

    NASA Astrophysics Data System (ADS)

    Liu, Hailiang; Wang, Zhongming

    2017-01-01

    We design an arbitrary-order free energy satisfying discontinuous Galerkin (DG) method for solving time-dependent Poisson-Nernst-Planck systems. Both the semi-discrete and fully discrete DG methods are shown to satisfy the corresponding discrete free energy dissipation law for positive numerical solutions. Positivity of numerical solutions is enforced by an accuracy-preserving limiter in reference to positive cell averages. Numerical examples are presented to demonstrate the high resolution of the numerical algorithm and to illustrate the proven properties of mass conservation, free energy dissipation, as well as the preservation of steady states.

  3. System analysis of vehicle active safety problem

    NASA Astrophysics Data System (ADS)

    Buznikov, S. E.

    2018-02-01

    The problem of road transport safety affects the vital interests of most of the population and is of global significance. A system analysis of the problem of creating competitive active vehicle safety systems is presented as an interrelated complex of tasks of multi-criterion optimization and dynamic stabilization of the state variables of a controlled object. Solving them requires generating all possible variants of technical solutions within the software and hardware domains and synthesizing a control that is close to optimal. The system analysis task is carried out using Zwicky's “morphological box” method. Creation of comprehensive active safety systems involves solving the problem of preventing typical collisions. To this end, a structured set of collisions is introduced, with its elements also generated using Zwicky's “morphological box” method. The obstacle speed, the longitudinal acceleration of the controlled object, unpredictable changes in its direction of movement due to certain faults, the road surface condition, and the control errors are taken as the structure variables that characterize the conditions of collisions. The conditions for preventing typical collisions are presented as inequalities on the physical variables that define the state vector of the object and its dynamic limits.

  4. Wide-Area Situational Awareness of Power Grids with Limited Phasor Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Nieplocha, Jarek

    Lack of situational awareness has been identified as one of the root causes of the August 14, 2003 Northeast Blackout in North America. To improve situational awareness, the Department of Energy (DOE) launched several projects to deploy Wide Area Measurement Systems (WAMS) in different interconnections. Compared to the tens of thousands of buses, the number of Phasor Measurement Units (PMUs) is quite limited and not enough to achieve observability of whole interconnections. To use the limited number of PMU measurements to improve situational awareness, this paper proposes combining PMU measurement data and power flow equations to form a hybrid power flow model. Technically, a model is proposed that combines the concept of observable islands with the modeling of power flow conditions. The model is called a Hybrid Power Flow Model because it contains both PMU measurements and simulation assumptions, describing the prior knowledge available about the whole power system. By solving the hybrid power flow equations, the proposed method can derive power system states and thereby improve the situational awareness of a power grid.

  5. Revisiting competition in a classic model system using formal links between theory and data.

    PubMed

    Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J

    2012-09-01

    Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.

  6. Information systems in healthcare - state and steps towards sustainability.

    PubMed

    Lenz, R

    2009-01-01

    To identify core challenges and first steps on the way to sustainable information systems in healthcare. Recent articles on healthcare information technology and related articles from medical informatics and computer science were reviewed and analyzed. Core challenges that could not be solved over the years were identified. The two core problem areas are process integration, i.e., effectively embedding IT systems into routine workflows, and systems integration, i.e., reducing the effort required to interconnect independently developed IT components. Standards for systems integration have improved considerably, but their usefulness is limited where system evolution is needed. Sustainable healthcare information systems should be based on system architectures that support system evolution and avoid costly system replacements every five to ten years. Basic principles for the design of such systems include separation of concerns, loose coupling, deferred systems design, and service-oriented architectures.

  7. Large-N-approximated field theory for multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Parisi, G.; Pascazio, S.; Scardicchio, A.

    2015-12-01

    We try to characterize the statistics of multipartite entanglement of the random states of an n-qubit system. Unable to solve the problem exactly, we generalize it, replacing complex numbers with real vectors with Nc components (the original problem is recovered for Nc = 2). Studying the leading diagrams in the large-Nc approximation, we unearth the presence of a phase transition and, in an explicit example, show that the so-called entanglement frustration disappears in the large-Nc limit.

  8. Indicators of Arctic Sea Ice Bistability in Climate Model Simulations and Observations

    DTIC Science & Technology

    2014-09-30

    ultimately developed a novel mathematical method to solve the system of equations, involving the addition of a numerical “ghost” layer, as described in the... balance models (EBMs) and (ii) seasonally-varying single-column models (SCMs). As described in Approach item #1, we developed an idealized model that... includes both latitudinal and seasonal variations (Fig. 1). The model reduces to a standard EBM or SCM as limiting cases in the parameter space, thus

  9. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important for solving challenging data-intensive computing problems. In this paper, we propose to leverage PCM in a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.

  10. Parent-Teacher Communication about Children with Autism Spectrum Disorder: An Examination of Collaborative Problem-Solving

    PubMed Central

    Azad, Gazi F.; Kim, Mina; Marcus, Steven C.; Mandell, David S.; Sheridan, Susan M.

    2016-01-01

    Effective parent-teacher communication involves problem-solving concerns about students. Few studies have examined problem solving interactions between parents and teachers of children with autism spectrum disorder (ASD), with a particular focus on identifying communication barriers and strategies for improving them. This study examined the problem-solving behaviors of parents and teachers of children with ASD. Participants included 18 teachers and 39 parents of children with ASD. Parent-teacher dyads were prompted to discuss and provide a solution for a problem that a student experienced at home and at school. Parents and teachers also reported on their problem-solving behaviors. Results showed that parents and teachers displayed limited use of the core elements of problem-solving. Teachers displayed more problem-solving behaviors than parents. Both groups reported engaging in more problem-solving behaviors than they were observed to display during their discussions. Our findings suggest that teacher and parent training programs should include collaborative approaches to problem-solving. PMID:28392604

  11. Parent-Teacher Communication about Children with Autism Spectrum Disorder: An Examination of Collaborative Problem-Solving.

    PubMed

    Azad, Gazi F; Kim, Mina; Marcus, Steven C; Mandell, David S; Sheridan, Susan M

    2016-12-01

    Effective parent-teacher communication involves problem-solving concerns about students. Few studies have examined problem solving interactions between parents and teachers of children with autism spectrum disorder (ASD), with a particular focus on identifying communication barriers and strategies for improving them. This study examined the problem-solving behaviors of parents and teachers of children with ASD. Participants included 18 teachers and 39 parents of children with ASD. Parent-teacher dyads were prompted to discuss and provide a solution for a problem that a student experienced at home and at school. Parents and teachers also reported on their problem-solving behaviors. Results showed that parents and teachers displayed limited use of the core elements of problem-solving. Teachers displayed more problem-solving behaviors than parents. Both groups reported engaging in more problem-solving behaviors than they were observed to display during their discussions. Our findings suggest that teacher and parent training programs should include collaborative approaches to problem-solving.

  12. The semantic system is involved in mathematical problem solving.

    PubMed

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Extended linear detection range for optical tweezers using image-plane detection scheme

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Faegheh; Masoumeh Mousavi, S.; Khaksar, Zeinab S.; Reihani, S. Nader S.

    2014-10-01

    The ability to measure pico- and femto-Newton range forces using optical tweezers (OT) strongly relies on the sensitivity of the detection system. We show that the commonly used back-focal-plane detection method provides a linear response range that is shorter than that of the restoring force of OT for large beads, which limits the measurable force range of OT. We show, both theoretically and experimentally, that using a second laser beam for tracking can solve the problem. We also propose a new detection scheme in which the quadrant photodiode is positioned at the plane optically conjugate to the object plane (the image plane). This method solves the problem without the need for a second laser beam for the bead sizes that are commonly used in force spectroscopy applications of OT.

  14. On Solving Linear Recurrences

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2013-01-01

    A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
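    The recurrence type treated in the note can be illustrated concretely. A short sketch (the coefficients and starting value are assumed) of the closed form and its limit as n grows:

```python
# x_{n+1} = a*x_n + b with constant coefficients (values below are assumed).
# For a != 1 the fixed point is L = b/(1 - a), and x_n = a**n * (x0 - L) + L,
# which converges to L as n -> infinity whenever |a| < 1.
def solve_recurrence(a, b, x0, n):
    L = b / (1.0 - a)                   # fixed point: L = a*L + b
    return a**n * (x0 - L) + L

# Direct iteration agrees with the closed form:
x = 5.0
for _ in range(10):
    x = 0.5 * x + 3.0
print(x, solve_recurrence(0.5, 3.0, 5.0, 10))   # both near the limit L = 6
```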

  15. An updated Lagrangian discontinuous Galerkin hydrodynamic method for gas dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Tong; Shashkov, Mikhail Jurievich; Morgan, Nathaniel Ray

    Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for gas dynamics. The new method evolves conserved unknowns in the current configuration, which obviates the Jacobi matrix that maps the element in a reference coordinate system or the initial coordinate system to the current configuration. The density, momentum, and total energy (ρ, ρu, E) are approximated with conservative higher-order Taylor expansions over the element and are limited toward a piecewise constant field near discontinuities using a limiter. Two new limiting methods are presented for enforcing the bounds on the primitive variables of density, velocity, and specific internal energy (ρ, u, e). The nodal velocity, and the corresponding forces, are calculated by solving an approximate Riemann problem at the element nodes. An explicit second-order method is used to temporally advance the solution. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. 1D Cartesian coordinates test problem results are presented to demonstrate the accuracy and convergence order of the new DG method with the new limiters.

  16. An updated Lagrangian discontinuous Galerkin hydrodynamic method for gas dynamics

    DOE PAGES

    Wu, Tong; Shashkov, Mikhail Jurievich; Morgan, Nathaniel Ray; ...

    2018-04-09

    Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for gas dynamics. The new method evolves conserved unknowns in the current configuration, which obviates the Jacobi matrix that maps the element in a reference coordinate system or the initial coordinate system to the current configuration. The density, momentum, and total energy (ρ, ρu, E) are approximated with conservative higher-order Taylor expansions over the element and are limited toward a piecewise constant field near discontinuities using a limiter. Two new limiting methods are presented for enforcing the bounds on the primitive variables of density, velocity, and specific internal energy (ρ, u, e). The nodal velocity, and the corresponding forces, are calculated by solving an approximate Riemann problem at the element nodes. An explicit second-order method is used to temporally advance the solution. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. 1D Cartesian coordinates test problem results are presented to demonstrate the accuracy and convergence order of the new DG method with the new limiters.

  17. Reduze - Feynman integral reduction in C++

    NASA Astrophysics Data System (ADS)

    Studerus, C.

    2010-07-01

    Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel. Program summary: Program title: Reduze. Catalogue identifier: AEGE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: yes. No. of lines in distributed program, including test data, etc.: 55 433. No. of bytes in distributed program, including test data, etc.: 554 866. Distribution format: tar.gz. Programming language: C++. Computer: All. Operating system: Unix/Linux. Number of processors used: problem dependent; more than one is possible, but not arbitrarily many. RAM: depends on the complexity of the system. Classification: 4.4, 5. External routines: CLN (http://www.ginac.de/CLN/), GiNaC (http://www.ginac.de/). Nature of problem: solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors. Solution method: a Gauss/Laporta algorithm to solve the system of equations. Restrictions: limitations depend on the complexity of the system (number of equations, number of kinematic invariants). Running time: depends on the complexity of the system.
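    Reduze's solution method is a Gauss/Laporta reduction with exact rational arithmetic on the prefactors. As a toy illustration only (not Reduze's C++ implementation; the 2×2 system is invented), exact Gaussian elimination over the rationals looks like:

```python
from fractions import Fraction as F

# Toy Gauss-Jordan elimination with exact rational prefactors, the kind of
# arithmetic at the core of Laporta-style reductions. The system is assumed.
def gauss_solve(M, rhs):
    """Solve M x = rhs exactly over the rationals (square, nonsingular M)."""
    n = len(M)
    A = [[F(v) for v in row] + [F(r)] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)  # pivot row
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]              # normalize pivot
        for r in range(n):
            if r != col and A[r][col] != 0:                     # eliminate column
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return [row[-1] for row in A]

# Express two "integrals" in terms of the right-hand sides via a 2x2 system:
print(gauss_solve([[2, 1], [1, 3]], [5, 10]))   # solves to x = 1, y = 3
```

    Working over `Fraction` avoids the rounding error that floating-point elimination would introduce into the prefactors.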

  18. Calculation Method of Lateral Strengths and Ductility Factors of Constructions with Shear Walls of Different Ductility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide

    2008-07-08

    For seismic design, ductility-related force modification factors are called the R factor in the U.S. Uniform Building Code, the q factor in Eurocode 8, and the Ds factor (the inverse of R) in the Japanese Building Code. These ductility-related force modification factors appear in those codes for each type of shear element. Some constructions use various types of shear walls that have different ductility, especially after retrofit or re-strengthening. In these cases, engineers struggle to decide on the force modification factors of the constructions. To solve this problem, a new method to calculate the lateral strengths of stories for simple shear wall systems is proposed and named the 'Stiffness-Potential Energy Addition Method' in this paper. This method uses two design lateral strengths for each type of shear wall, one in the damage limit state and one in the safety limit state. The two lateral strengths of stories in both limit states are calculated from these design lateral strengths. The calculated strengths have the same quality as values obtained by the strength addition method using many steps of load-deformation data of shear walls. A new method to calculate ductility factors, based on the new method for calculating lateral strengths of stories, is also proposed in this paper. This method can solve the problem of obtaining ductility factors of stories with shear walls of different ductility.

  19. Assessment Position Affects Problem-Solving Behaviors in a Child With Motor Impairments.

    PubMed

    O'Grady, Michael G; Dusing, Stacey C

    2016-01-01

    The purpose of this report was to examine the problem-solving behaviors of a child with significant motor impairments in positions she could maintain independently (supine and prone) and in a position that required support (sitting). The child was a 22-month-old girl who could not sit independently and had limited independent mobility. Her problem-solving behaviors were assessed using the Early Problem Solving Indicator while she was placed in the supine or prone position, and again in a manually supported sitting position. In the manually supported sitting position, the subject demonstrated a higher frequency of problem-solving behaviors and her most developmentally advanced problem-solving behavior. Because a child's position may affect cognitive test results, position should be documented at the time of testing.

  20. Embedding Game-Based Problem-Solving Phase into Problem-Posing System for Mathematics Learning

    ERIC Educational Resources Information Center

    Chang, Kuo-En; Wu, Lin-Jung; Weng, Sheng-En; Sung, Yao-Ting

    2012-01-01

    A problem-posing system is developed with four phases (posing problems, planning, solving problems, and looking back), in which the "solving problem" phase is implemented through game scenarios. The system supports elementary students in the process of problem-posing, allowing them to fully engage in mathematical activities. In total, 92 fifth…

  1. On Solving Systems of Equations by Successive Reduction Using 2×2 Matrices

    ERIC Educational Resources Information Center

    Carley, Holly

    2014-01-01

    Usually a student learns to solve a system of linear equations in two ways: "substitution" and "elimination." While the two methods will of course lead to the same answer, they are considered different because the thinking process is different. In this paper the author solves a system in these two ways to demonstrate the…
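    The two methods the author contrasts can be shown side by side on a small assumed example system (not the one from the paper), computed exactly with rationals:

```python
from fractions import Fraction as F

# Assumed example system:
#   2x + 3y = 8
#    x -  y = -1
# Substitution: from the second equation x = y - 1; substituting into the
# first gives 2(y - 1) + 3y = 8, i.e. 5y = 10.
y_sub = F(8 + 2, 5)
x_sub = y_sub - 1

# Elimination: (first) + 3*(second) removes y: (2 + 3)x = 8 + 3*(-1) = 5.
x_el = F(8 + 3 * (-1), 2 + 3)
y_el = x_el + 1

print((x_sub, y_sub), (x_el, y_el))   # x = 1, y = 2 by both methods
```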

  2. Effects of an explicit problem-solving skills training program using a metacomponential approach for outpatients with acquired brain injury.

    PubMed

    Fong, Kenneth N K; Howie, Dorothy R

    2009-01-01

    We investigated the effects of an explicit problem-solving skills training program using a metacomponential approach with 33 outpatients with moderate acquired brain injury, in the Hong Kong context. We compared an experimental training intervention with this explicit problem-solving approach, which taught metacomponential strategies, with a conventional cognitive training approach that did not include explicit metacognitive training. We found significant advantages for the experimental group on the Metacomponential Interview measure in association with the explicit metacomponential training, but transfer to the real-life problem-solving measures was not supported by statistically significant findings. The small sample size, limited intervention time, and some limitations of the assessment tools may have contributed to these results. The training program was demonstrated to have a significantly greater effect than the conventional training approach on metacomponential functioning and the component of problem representation. However, these benefits were not transferable to real-life situations.

  3. A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.

    PubMed

    Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P

    2010-11-01

    The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in modeling shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, six scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, assumed to be spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
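    The two-step pseudo-inverse/null-space idea can be sketched on a toy redundant system. The matrix, load, and bounds below are assumed and far simpler than the shoulder model; the point is that any null-space shift leaves the equilibrium equations satisfied:

```python
import numpy as np

# Toy redundant system: 2 equilibrium equations, 3 unknown muscle forces f.
# Step 1 (pseudo-inverse): minimum-norm f satisfying A f = b.
# Step 2 (null space): add a component that leaves A f unchanged, chosen by a
# simple scan to respect assumed physiological bounds 0 <= f <= 2.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 4.0])

f_min = np.linalg.pinv(A) @ b            # minimum-norm particular solution

_, _, Vt = np.linalg.svd(A)
N = Vt[2]                                # null-space direction: A @ N ~ 0

def bound_violation(a):
    f = f_min + a * N
    return np.sum(np.maximum(0.0, f - 2.0) + np.maximum(0.0, -f))

alpha = min(np.linspace(-2.0, 2.0, 401), key=bound_violation)
f = f_min + alpha * N                    # still satisfies A f = b
print(f, A @ f)
```

    The paper formulates the null-space step as a proper optimization; the coarse scan here only illustrates that the shift trades off bound satisfaction without disturbing the dynamics constraint.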

  4. A Distributed Problem-Solving Approach to Rule Induction: Learning in Distributed Artificial Intelligence Systems

    DTIC Science & Technology

    1990-11-01

    Fragmentary report-documentation text; the recoverable citation is: Shaw, M., "A Distributed Problem-Solving Approach to Rule Induction: Learning in Distributed Artificial Intelligence Systems," in Distributed Artificial Intelligence, vol. II, L. Gasser and M. Huhns (eds), Pitman, London, 1989, pp. 413-430. The record also names Michael I. Shaw as author.

  5. How Mathematics Describes Life

    NASA Astrophysics Data System (ADS)

    Teklu, Abraham

    2017-01-01

    The circle of life is something we have all heard of, but we don't usually try to calculate it. For some time we have been analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is the Holling-Tanner model, which cannot be solved analytically. The Holling-Tanner model is very common in population dynamics because it is a simple description of how predators and prey interact. The model is a system of two differential equations. It is not specific to any particular pair of species, so it can describe predator-prey interactions ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common is critical points. A critical point is a pair of population values that keeps both populations constant; it is important because there the differential equations equal zero. This model has two critical points: a predator-free critical point and a coexistence critical point. Most of our analysis concerns the coexistence critical point, because the predator-free critical point is always unstable and less interesting. We considered two regimes for the differential equations, large B and small B, where A, B, and C are parameters that control the system: B measures how responsive the predators are to changes in population, A represents predation of the prey, and C represents the satiation point of the prey population. For large B we were able to approximate the system of differential equations by a single scalar equation. For small B we were able to predict the limit cycle, a process in which the predator and prey populations grow and shrink periodically. The model has a limit cycle in the small-B regime, which we solved for numerically; with some simplifying assumptions on the differential equations we were able to set up a system of equations and unknowns that predicts the behavior of the limit cycle for small B.
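    The two-equation system can be integrated numerically; the sketch below assumes one common non-dimensionalized form of the Holling-Tanner model and illustrative small-B parameter values, which need not match the abstract's exact scaling.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One common non-dimensionalized Holling-Tanner form: prey x with Holling
# type-II predation, predator y with logistic growth whose carrying
# capacity tracks the prey. Parameter values are illustrative only.
A, B, C = 1.0, 0.1, 0.1   # small-B regime

def holling_tanner(t, u):
    x, y = u
    dx = x * (1.0 - x) - A * x * y / (x + C)
    dy = B * y * (1.0 - y / x)
    return [dx, dy]

sol = solve_ivp(holling_tanner, (0.0, 400.0), [0.8, 0.2],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Late-time behavior hints at a limit cycle: populations keep oscillating
# between roughly constant bounds instead of settling to a point.
t_late = np.linspace(300.0, 400.0, 2000)
x_late = sol.sol(t_late)[0]

assert sol.success
assert np.all(sol.sol(t_late) > 0.0)            # populations stay positive
assert x_late.max() - x_late.min() > 1e-3       # persistent oscillation
```

    For these parameter values the coexistence equilibrium is an unstable focus, so bounded trajectories wind onto the cycle, which is the small-B behavior the abstract describes.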

  6. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    NASA Astrophysics Data System (ADS)

    de Renstrom, Pawel Brückman; Haywood, Stephen

    2006-04-01

    A least squares method for solving a generic alignment problem of a high-granularity tracking system is presented. The algorithm is based on an analytical linear expansion and allows for multiple nested fits; e.g. imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe for imposing constraints on either implicit or explicit parameters. The method has been applied to the full simulation of a subset of the ATLAS silicon tracking system. The ultimate goal is to determine ≈35,000 degrees of freedom (DoFs). We present a limited-scale exercise exploring various aspects of the solution.
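    Imposing an explicit linear constraint on a least-squares fit can be sketched with the Lagrange-multiplier (KKT) system; the following is a minimal toy example, not the ATLAS alignment formalism.

```python
import numpy as np

# Minimal sketch of constrained least squares: minimize ||A x - b||^2
# subject to C x = d, solved via the Lagrange-multiplier (KKT) system.
# A, C, d are hypothetical values chosen for illustration.
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                       # noiseless measurements
C = np.array([[1.0, 1.0, 0.0]])     # constraint: x0 + x1 = -1
d = np.array([-1.0])

n, m = 3, 1
K = np.block([[A.T @ A, C.T],
              [C, np.zeros((m, m))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(K, rhs)
x = sol[:n]                          # sol[n:] holds the multipliers

assert np.allclose(C @ x, d)         # constraint satisfied exactly
assert np.allclose(x, x_true)        # x_true happens to satisfy C x = d
```

    In an alignment fit the same block structure appears with many more parameters and constraint rows, which is why a consistent recipe for the constrained normal equations matters.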

  7. Simple simulation training system for short-wave radio station

    NASA Astrophysics Data System (ADS)

    Tan, Xianglin; Shao, Zhichao; Tu, Jianhua; Qu, Fuqi

    2018-04-01

    The short-wave radio station is one of the most important transmission equipment items of our signal corps, but in the actual teaching process there are few sets of equipment and many students, so the students' time for short-wave radio operation and practice is very limited. To solve this problem, it is necessary to develop a simple simulation training system for the short-wave radio station. The project combines hardware and software to simulate the voice communication operation and signal principles of a short-wave radio station, and can test its signal flow. The test results indicate that the system is simple to operate, has a friendly human-machine interface, and can make teaching more efficient.

  8. Analysis of randomly time varying systems by gaussian closure technique

    NASA Astrophysics Data System (ADS)

    Dash, P. K.; Iyengar, R. N.

    1982-07-01

    The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
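    For a linear test case the Gaussian closure is exact, which makes it a convenient sanity check; below is a sketch of the first two moment equations for a hypothetical scalar system dx = -a x dt + s dW, not the paper's multi-degree-of-freedom example.

```python
import numpy as np

# Moment equations for dx = -a x dt + s dW (linear, so Gaussian closure is
# exact): mean m' = -a m, variance P' = -2 a P + s^2. Integrated with a
# simple Euler scheme; a, s, and the initial condition are assumed values.
a, s = 1.0, 0.5
dt, T = 1e-3, 5.0
m, P = 2.0, 0.0
for _ in range(int(T / dt)):
    m += -a * m * dt
    P += (-2.0 * a * P + s * s) * dt

# Compare with the exact mean and variance of the Ornstein-Uhlenbeck process.
m_exact = 2.0 * np.exp(-a * T)
P_exact = s * s / (2.0 * a) * (1.0 - np.exp(-2.0 * a * T))
assert abs(m - m_exact) < 1e-2
assert abs(P - P_exact) < 1e-2
```

    In the nonlinear and stochastic-parameter cases treated in the paper, the closure supplies the extra relations needed to truncate the moment hierarchy at second order.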

  9. Visualization of the influence of the air conditioning system to the high-power laser beam quality with the modulation coherent imaging method.

    PubMed

    Tao, Hua; Veetil, Suhas P; Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2015-08-01

    Air conditioning systems can lead to dynamic phase changes in the laser beams of high-power laser facilities for inertial confinement fusion, and this kind of phase change cannot be measured by the commonly employed Hartmann wavefront sensors or by interferometry due to uncontrollable factors, such as very large laser beam diameters and the limited space of the facility. It is demonstrated that this problem can be solved using a scheme based on modulation coherent imaging, so that the influence of the air conditioning system on the performance of the high-power facility can be evaluated directly.

  10. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
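    One standard way to schedule the tasks in a sparse triangular solve is level scheduling: rows are grouped into levels with no mutual dependencies, so all rows in a level can be solved in parallel. A toy sketch follows (dense storage for brevity; the Sequent implementation details are not reproduced here).

```python
import numpy as np

# Toy lower-triangular system; in ICCG this would be the incomplete
# Cholesky factor in a sparse format.
L = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 4.0, 0.0],
              [0.0, 2.0, 1.0, 5.0]])
b = L @ np.ones(4)        # chosen so the exact solution is all ones

# Static dependence analysis: row i depends on row j when L[i, j] != 0,
# j < i. level[i] is one more than the deepest dependency.
n = len(b)
level = np.zeros(n, dtype=int)
for i in range(n):
    deps = [j for j in range(i) if L[i, j] != 0.0]
    level[i] = 1 + max((level[j] for j in deps), default=-1)

# Forward substitution, level by level: rows within a level are
# independent and could be assigned to different processors.
x = np.zeros(n)
for lev in range(level.max() + 1):
    for i in np.where(level == lev)[0]:
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]

assert list(level) == [0, 1, 0, 2]
assert np.allclose(x, 1.0)
```

    The level structure depends only on the sparsity pattern, so the analysis is done once and reused across the many triangular solves of the iteration, which is what makes the static approach pay off.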

  11. Tomographic phase microscopy: principles and applications in bioimaging [Invited]

    PubMed Central

    Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746
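    The reconstruction step can be sketched generically: under a first-order scattering approximation, recovering the RI map reduces to a regularized linear inversion. Below is a Tikhonov-regularized sketch with a random toy operator, not a physical scattering model.

```python
import numpy as np

# Toy ill-posed linear inversion y = H x + noise, standing in for the
# linearized scattering problem. H, x_true, and the noise level are
# hypothetical values for illustration.
rng = np.random.default_rng(3)
H = rng.normal(size=(40, 60))             # underdetermined forward operator
x_true = np.zeros(60)
x_true[10:20] = 1.0                       # a simple "object"
y = H @ x_true + 0.01 * rng.normal(size=40)

# Tikhonov regularization: minimize ||H x - y||^2 + lam * ||x||^2,
# whose normal equations are (H^T H + lam I) x = H^T y.
lam = 0.1
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(60), H.T @ y)

assert x_hat.shape == (60,)
# The regularized estimate fits the data better than the zero solution.
assert np.linalg.norm(H @ x_hat - y) < np.linalg.norm(y)
```

    TPM systems replace the toy H with a diffraction model and the quadratic penalty with edge-preserving or sparsity priors, but the structure of the inversion is the same.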

  12. Quantum mechanics on the h-deformed quantum plane

    NASA Astrophysics Data System (ADS)

    Cho, Sunggoo

    1999-03-01

    We find the covariant deformed Heisenberg algebra and the Laplace-Beltrami operator on the extended h-deformed quantum plane and solve the Schrödinger equations explicitly for some physical systems on the quantum plane. In the commutative limit the behaviour of a quantum particle on the quantum plane becomes that of the quantum particle on the Poincaré half-plane, a surface of constant negative Gaussian curvature. We show that the bound state energy spectra for particles under specific potentials depend explicitly on the deformation parameter h. Moreover, it is shown that bound states can survive on the quantum plane in a limiting case where bound states on the Poincaré half-plane disappear.

  13. Solution to the Problems of the Sustainable Development Management

    NASA Astrophysics Data System (ADS)

    Rusko, Miroslav; Procházková, Dana

    2011-01-01

    The paper shows that the environment is one of the basic public assets of a human system and must therefore be specially protected. According to present knowledge, sustainability is necessary for all human systems, and the sustainable development principles must be invoked for all human system assets. Sustainable development is understood as development that does not erode the ecological, social or political systems on which it depends, but explicitly accepts ecological limitations within the frame of economic activity and fully supports human needs. The paper summarises the conditions for sustainable development; the tools, methods and techniques for solving environmental problems; and the tasks of executive governance in the environmental segment.

  14. The ascendance of microphysiological systems to solve the drug testing dilemma

    PubMed Central

    Dehne, Eva-Maria; Hasenberg, Tobias; Marx, Uwe

    2017-01-01

    The development of drugs is a process obstructed with manifold security and efficacy concerns. Although animal models are still widely used to meet the diligence required, they are regarded as outdated tools with limited predictability. Novel microphysiological systems intend to create systemic models of human biology. Their ability to host 3D organoid constructs in a controlled microenvironment with mechanical and electrophysiological stimuli enables them to create and maintain homeostasis. These platforms are, thus, envisioned to be superior tools for testing and developing substances such as drugs, cosmetics and chemicals. We will present reasons why microphysiological systems are required for the emerging demands, highlight current technological and regulatory obstacles, and depict possible solutions from state-of-the-art platforms from major contributors. PMID:28670475

  15. The ascendance of microphysiological systems to solve the drug testing dilemma.

    PubMed

    Dehne, Eva-Maria; Hasenberg, Tobias; Marx, Uwe

    2017-06-01

    The development of drugs is a process obstructed with manifold security and efficacy concerns. Although animal models are still widely used to meet the diligence required, they are regarded as outdated tools with limited predictability. Novel microphysiological systems intend to create systemic models of human biology. Their ability to host 3D organoid constructs in a controlled microenvironment with mechanical and electrophysiological stimuli enables them to create and maintain homeostasis. These platforms are, thus, envisioned to be superior tools for testing and developing substances such as drugs, cosmetics and chemicals. We will present reasons why microphysiological systems are required for the emerging demands, highlight current technological and regulatory obstacles, and depict possible solutions from state-of-the-art platforms from major contributors.

  16. Bifurcation theory for finitely smooth planar autonomous differential systems

    NASA Astrophysics Data System (ADS)

    Han, Maoan; Sheng, Lijuan; Zhang, Xiang

    2018-03-01

    In this paper we establish a bifurcation theory of limit cycles for planar Ck smooth autonomous differential systems, with k ∈ N. The key point is to study the smoothness of bifurcation functions, which are a basic and important tool in the study of Hopf bifurcation at a fine focus or a center, and of Poincaré bifurcation in a period annulus. We especially study the smoothness of the first-order Melnikov function in degenerate Hopf bifurcation at an elementary center. The smoothness problem was solved for analytic and C∞ differential systems, but had not been tackled for finitely smooth differential systems. Here, we present the optimal regularity of these bifurcation functions and their asymptotic expressions in the finitely smooth case.

  17. Video systems for real-time oil-spill detection

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.

    1973-01-01

    Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.

  18. ELECTRONIC DIGITAL COMPUTER

    DOEpatents

    Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

    1957-10-01

    The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
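    The "Von Seidel's method" named in the record appears to be the Gauss-Seidel iteration; here is a minimal sketch of that method of successive approximations on a hypothetical diagonally dominant system, the rapidly converging case the patent describes.

```python
import numpy as np

# Hypothetical diagonally dominant system A x = b, for which Gauss-Seidel
# converges rapidly.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])

x = np.zeros(3)
for _ in range(50):                       # successive approximations
    for i in range(3):
        s = A[i] @ x - A[i, i] * x[i]     # off-diagonal contribution,
        x[i] = (b[i] - s) / A[i, i]       # using already-updated values

assert np.allclose(A @ x, b, atol=1e-8)
```

    Each unknown is updated in place from the latest values of the others, which is exactly the summation-and-correction cycle the computer performs in hardware.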

  19. Numerical Solution of the Three-Dimensional Navier-Stokes Equation.

    DTIC Science & Technology

    1982-03-01

    Fragmentary record text; recoverable content: the work concerns the numerical solution of the three-dimensional Navier-Stokes equations for a compressible, viscous fluid in an arbitrary geometry. A grid-generating scheme is used, so the geometry of the physical problem and the Jacobian J of the mapping are provided (for work on grid-generating schemes see [4], [5] or [6]). To work within machine limitations, the data structure used in the ILLIAC code partitions the grid into 8 x 8 x 8 blocks.

  20. Heat transfer in a micropolar fluid over a stretching sheet with Newtonian heating.

    PubMed

    Qasim, Muhammad; Khan, Ilyas; Shafie, Sharidan

    2013-01-01

    This article examines the steady flow of a micropolar fluid over a stretching surface with heat transfer in the presence of Newtonian heating. The relevant partial differential equations are reduced to ordinary differential equations, and the reduced system is solved numerically by the Runge-Kutta-Fehlberg fourth-fifth order method. The influence of the parameters involved on the dimensionless velocity, microrotation and temperature is examined. Excellent agreement is found between the present and previous limiting results.
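    Once a problem has been reduced to ordinary differential equations, an adaptive embedded Runge-Kutta pair solves it; the sketch below uses SciPy's RK45 (a Dormand-Prince pair closely related to the Runge-Kutta-Fehlberg 4(5) method named in the record) on a toy ODE system, not the micropolar boundary-value problem itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Toy first-order system: the harmonic oscillator y'' = -y rewritten
    # as two coupled first-order equations.
    return [y[1], -y[0]]

# Adaptive embedded pair: the solver compares 4th- and 5th-order estimates
# to control the local error at each step.
sol = solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0], method="RK45",
                rtol=1e-9, atol=1e-12)

assert sol.success
assert abs(sol.y[0, -1]) < 1e-6   # y(t) = sin(t) returns to 0 at t = pi
```

    For the actual micropolar problem, the same integrator is wrapped in a shooting procedure so that the far-field boundary conditions are met.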

  1. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando

    1989-01-01

    It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

  2. Design concepts for the development of cooperative problem-solving systems

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, Elaine; Layton, Chuck; Bihari, Tom

    1992-01-01

    There are many problem-solving tasks that are too complex to fully automate given the current state of technology. Nevertheless, significant improvements in overall system performance could result from the introduction of well-designed computer aids. We have been studying the development of cognitive tools for one such problem-solving task, enroute flight path planning for commercial airlines. Our goal was two-fold: first, to develop specific system designs to help with this important practical problem; second, to use this context to explore general design concepts to guide the development of cooperative problem-solving systems. These design concepts are described.

  3. NEWTON - NEW portable multi-sensor scienTific instrument for non-invasive ON-site characterization of rock from planetary surface and sub-surfaces

    NASA Astrophysics Data System (ADS)

    Díaz-Michelena, M.; de Frutos, J.; Ordóñez, A. A.; Rivero, M. A.; Mesa, J. L.; González, L.; Lavín, C.; Aroca, C.; Sanz, M.; Maicas, M.; Prieto, J. L.; Cobos, P.; Pérez, M.; Kilian, R.; Baeza, O.; Langlais, B.; Thébault, E.; Grösser, J.; Pappusch, M.

    2017-09-01

    In space instrumentation there is currently no instrument dedicated to susceptibility or complete magnetization measurements of rocks. Magnetic field instrument suites are generally vector (or scalar) magnetometers, which measure the local magnetic field. When they are mounted on board rovers, the electromagnetic perturbations associated with motors and other elements make it difficult to reap the benefits of such instruments. However, magnetic characterization is essential for understanding key aspects of the present and past history of planetary objects. The work presented here overcomes these limitations by developing a new portable and compact multi-sensor instrument for ground-breaking high-resolution magnetic characterization of planetary surfaces and sub-surfaces. The instrument introduces, for the first time, magnetic susceptometry (real and imaginary parts) as a complement to existing compact vector magnetometers for planetary exploration, with the aim of addressing open questions on crustal and, more generally, planetary evolution within the Solar System.

  4. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem

    PubMed Central

    Yu, Yi; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of a hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensionality, and solution techniques for the model have been a research hotspot. Building on the ability of the artificial bee colony algorithm (ABC) to solve high-dimensional problems efficiently, this paper proposes an improved artificial bee colony algorithm for the DHTS problem. The improvements are twofold. First, local search is guided in each generation by the global optimal solution and its gradient: the global optimal solution improves search efficiency but reduces diversity, while the gradient weakens this loss of diversity. Second, inspired by genetic algorithms, a nectar source that has not been updated within the limit number of generations is transformed into a new one by selection, crossover and mutation, which preserves individual diversity and makes full use of prior information, improving the algorithm's global search ability. Both improvements are shown to be effective on a classical numerical example, with the genetic operator contributing the larger performance gain. Compared with other state-of-the-art algorithms, the enhanced ABC algorithm has general advantages in minimum, average and maximum cost, which shows its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems and a novel reference for the improvement and application of such algorithms. PMID:29324743

  5. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem.

    PubMed

    Yu, Yi; Wu, Yonggang; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of a hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensionality, and solution techniques for the model have been a research hotspot. Building on the ability of the artificial bee colony algorithm (ABC) to solve high-dimensional problems efficiently, this paper proposes an improved artificial bee colony algorithm for the DHTS problem. The improvements are twofold. First, local search is guided in each generation by the global optimal solution and its gradient: the global optimal solution improves search efficiency but reduces diversity, while the gradient weakens this loss of diversity. Second, inspired by genetic algorithms, a nectar source that has not been updated within the limit number of generations is transformed into a new one by selection, crossover and mutation, which preserves individual diversity and makes full use of prior information, improving the algorithm's global search ability. Both improvements are shown to be effective on a classical numerical example, with the genetic operator contributing the larger performance gain. Compared with other state-of-the-art algorithms, the enhanced ABC algorithm has general advantages in minimum, average and maximum cost, which shows its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems and a novel reference for the improvement and application of such algorithms.
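    The baseline ABC loop that the paper enhances can be sketched as follows; this is a generic textbook-style ABC on a toy objective, with the "limit" counter that triggers the scout phase (the step the paper replaces with selection, crossover and mutation). It is not the EABC itself nor the DHTS cost function.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sum(x ** 2)      # toy cost, a stand-in for the DHTS objective

dim, n_food, limit, iters = 3, 10, 20, 300
lo, hi = -5.0, 5.0
food = rng.uniform(lo, hi, (n_food, dim))    # nectar sources
cost = np.array([f(x) for x in food])
trials = np.zeros(n_food, dtype=int)         # stagnation counters

def try_neighbor(i):
    k = rng.integers(n_food - 1)
    k += k >= i                              # random partner != i
    j = rng.integers(dim)
    cand = food[i].copy()
    cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
    cand = np.clip(cand, lo, hi)
    c = f(cand)
    if c < cost[i]:                          # greedy replacement
        food[i], cost[i], trials[i] = cand, c, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_food):                  # employed-bee phase
        try_neighbor(i)
    fit = 1.0 / (1.0 + cost)                 # onlooker phase: fitness-biased
    for i in rng.choice(n_food, n_food, p=fit / fit.sum()):
        try_neighbor(i)
    worn = np.argmax(trials)                 # scout phase: abandon a stale
    if trials[worn] > limit:                 # source for a random one
        food[worn] = rng.uniform(lo, hi, dim)
        cost[worn] = f(food[worn])
        trials[worn] = 0

assert cost.min() < 0.05                     # converges near the optimum
```

    The EABC in the paper adds gradient-guided local search and swaps the random scout restart for genetic operators acting on the exhausted source.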

  6. Advanced Computational Methods for Security Constrained Financial Transmission Rights: Structure and Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elbert, Stephen T.; Kalsi, Karanjit; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) help power market participants reduce price risks associated with transmission congestion. FTRs are issued through a process of solving a constrained optimization problem whose objective is to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, a novel non-linear dynamical system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on large-scale systems using data from the Western Electricity Coordinating Council (WECC). The NDS is demonstrated to outperform the widely used CPLEX algorithms while exhibiting superior scalability. Furthermore, the NDS-based solver can be easily parallelized, which results in significant computational improvement.
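    The underlying allocation problem has the shape of a constrained welfare maximization; a toy linear-programming stand-in is sketched below, with hypothetical bids and flow sensitivities, solved with an off-the-shelf LP solver rather than the paper's NDS approach.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FTR-style allocation: maximize bid-weighted awards subject to line
# flow limits in both directions. All numbers are hypothetical.
bids = np.array([10.0, 8.0, 5.0])          # $/MW bid for each FTR request
ptdf = np.array([[0.5, 0.3, 0.2],          # flow sensitivities of 2 lines
                 [0.2, 0.4, 0.4]])         # to the 3 requests
line_limit = np.array([100.0, 80.0])       # MW limits per line

res = linprog(c=-bids,                     # linprog minimizes, so negate
              A_ub=np.vstack([ptdf, -ptdf]),
              b_ub=np.concatenate([line_limit, line_limit]),
              bounds=[(0.0, 150.0)] * 3)   # per-request award caps

assert res.status == 0                     # optimal solution found
assert np.all(ptdf @ res.x <= line_limit + 1e-6)   # security respected
```

    The real auction couples many such constraint sets across FTR categories, which is what drives the dimensionality the NDS solver is designed to handle.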

  7. Non-Markovian electron dynamics in nanostructures coupled to dissipative contacts

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Knezevic, I.

    2013-02-01

    In quasiballistic semiconductor nanostructures, carrier exchange between the active region and dissipative contacts is the mechanism that governs relaxation. In this paper, we present a theoretical treatment of transient quantum transport in quasiballistic semiconductor nanostructures, which is based on the open system theory and valid on timescales much longer than the characteristic relaxation time in the contacts. The approach relies on a model interaction between the current-limiting active region and the contacts, given in the scattering-state basis. We derive a non-Markovian master equation for the irreversible evolution of the active region's many-body statistical operator by coarse-graining the exact dynamical map over the contact relaxation time. In order to obtain the response quantities of a nanostructure under bias, such as the potential and the charge and current densities, the non-Markovian master equation must be solved numerically together with the Schrödinger, Poisson, and continuity equations. We discuss how to numerically solve this coupled system of equations and illustrate the approach on the example of a silicon nin diode.

  8. Finite Element Analysis of Tube Hydroforming in Non-Symmetrical Dies

    NASA Astrophysics Data System (ADS)

    Nulkar, Abhishek V.; Gu, Randy; Murty, Pilaka

    2011-08-01

    Tube hydroforming has been studied intensively using commercial finite element programs. Many of these investigations dealt with models with symmetric cross-sections, where additional constraints due to symmetry may be imposed so that the model is properly supported. For a non-symmetric model these constraints become invalid, and the model lacks sufficient support, resulting in a singular finite element system. The majority of commercial codes have limited capability for solving models with insufficient supports. Recently, new algorithms using a penalty variable and an air-like contact element (ALCE) have been developed to solve positive semi-definite finite element systems such as those in contact mechanics. In this study the ALCE algorithm is first validated by comparing its result against a commercial code using a symmetric model in which a circular tube is formed into polygonal dies with symmetric shapes. The study then investigates the accuracy and efficiency of using ALCE to analyze hydroforming of tubes with various cross-sections in non-symmetrical dies in 2-D finite element settings.

  9. Active mass damper system for high-rise buildings using neural oscillator and position controller considering stroke limitation of the auxiliary mass

    NASA Astrophysics Data System (ADS)

    Hongu, J.; Iba, D.; Nakamura, M.; Moriwaki, I.

    2016-04-01

    This paper proposes a method for solving the stroke-limitation problem of the auxiliary masses in active mass damper systems for high-rise buildings. The method is used in a new, simple control system for active mass dampers that mimics the motion of bipedal mammals, comprising a neural oscillator that synchronizes with the acceleration response of the structure and a position controller. In this system, the travel distance and direction of the auxiliary mass are determined with reference to the output of the neural oscillator, and the auxiliary mass is then moved to the chosen location by a PID controller. One purpose of the previously proposed system was to avoid the stroke-restriction problem during large earthquakes by keeping the desired value within the stroke limitation of the auxiliary mass. However, limiting the desired value alone could not rigorously restrict the auxiliary mass within the limitation, because inertia forces beyond the control force produced by the position controller act on the auxiliary mass. To eliminate the effect of the structural absolute acceleration on the auxiliary mass, a cancellation method is introduced by adding a term to the control force of the position controller. We first develop the previously proposed system for the active mass damper together with the additional cancellation term, and verify through numerical experiments that the new system can keep the auxiliary mass within the restriction during large earthquakes. From a comparison of the proposed system with an LQ system, we conclude that the proposed neuronal system with the additional term is able to limit the stroke of the auxiliary mass of the AMD.

  10. Metallization failures

    NASA Technical Reports Server (NTRS)

    Beatty, R.

    1971-01-01

    Metallization-related failure mechanisms were shown to be a major cause of integrated circuit failures under accelerated stress conditions, as well as in actual use under field operation. The integrated circuit industry is aware of the problem and is attempting to solve it in one of two ways: (1) better understanding of the aluminum system, which is the most widely used metallization material for silicon integrated circuits both as a single level and multilevel metallization, or (2) evaluating alternative metal systems. Aluminum metallization offers many advantages, but also has limitations particularly at elevated temperatures and high current densities. As an alternative, multilayer systems of the general form, silicon device-metal-inorganic insulator-metal, are being considered to produce large scale integrated arrays. The merits and restrictions of metallization systems in current usage and systems under development are defined.

  11. Design of UAV high resolution image transmission system

    NASA Astrophysics Data System (ADS)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    To address the bandwidth limitation of the image transmission system on a UAV, a scheme using image compression technology for mini UAVs is proposed, based on the requirements of a high-definition image transmission system. The H.264 video codec standard, its coding module, and key technologies for UAV area video communication were analyzed and studied. Based on research into high-resolution image encoding/decoding techniques and wireless transmission methods, a high-resolution image transmission system was designed on an Android architecture with a video codec chip. The system was verified by laboratory experiments: the bit rate can be controlled easily, QoS is stable, and the low latency meets most application requirements, for both military and industrial use.

  12. Including Critical Thinking and Problem Solving in Physical Education

    ERIC Educational Resources Information Center

    Pill, Shane; SueSee, Brendan

    2017-01-01

    Many physical education curriculum frameworks include statements about the inclusion of critical inquiry processes and the development of creativity and problem-solving skills. The learning environment created by physical education can encourage or limit the application and development of the learners' cognitive resources for critical and creative…

  13. Productive Failure in STEM Education

    ERIC Educational Resources Information Center

    Trueman, Rebecca J.

    2014-01-01

    Science education is criticized because it often fails to support problem-solving skills in students. Instead, the instructional methods primarily emphasize didactic models that fail to engage students and reveal how the material can be applied to solve real problems. To overcome these limitations, this study asked participants in a general…

  14. Design of a Cognitive Tool to Enhance Problem-Solving Performance

    ERIC Educational Resources Information Center

    Lee, Youngmin; Nelson, David

    2005-01-01

    The design of a cognitive tool to support problem-solving performance through the external representation of knowledge is described. The limitations of conventional knowledge maps are analyzed in proposing the tool. The design principles and specifications are described. This tool is expected to enhance learners' problem-solving performance by allowing…

  15. How do video-based demonstration assessment tasks affect problem-solving process, test anxiety, chemistry anxiety and achievement in general chemistry students?

    NASA Astrophysics Data System (ADS)

    Terrell, Rosalind Stephanie

    2001-12-01

    Because paper-and-pencil testing provides limited knowledge about what students know about chemical phenomena, we have developed video-based demonstrations to broaden measurement of student learning. For example, students might be shown a video demonstrating equilibrium shifts. Two methods for viewing equilibrium shifts are changing the concentration of the reactants and changing the temperature of the system. The students are required to combine the data collected from the video and their knowledge of chemistry to determine which way the equilibrium shifts. Video-based demonstrations are important techniques for measuring student learning because they require students to apply conceptual knowledge learned in class to a specific chemical problem. This study explores how video-based demonstration assessment tasks affect problem-solving processes, test anxiety, chemistry anxiety and achievement in general chemistry students. Several instruments were used to determine students' knowledge about chemistry, students' test and chemistry anxiety before and after treatment. Think-aloud interviews were conducted to determine students' problem-solving processes after treatment. The treatment group was compared to a control group and a group watching video demonstrations. After treatment students' anxiety increased and achievement decreased. There were also no significant differences found in students' problem-solving processes following treatment. These negative findings may be attributed to several factors that will be explored in this study.

  16. Design Process for High Speed Civil Transport Aircraft Improved by Neural Network and Regression Methods

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.

    1998-01-01

    A key challenge in designing the new High Speed Civil Transport (HSCT) aircraft is determining a good match between the airframe and engine. Multidisciplinary design optimization can be used to solve the problem by adjusting parameters of both the engine and the airframe. Earlier, an example problem was presented of an HSCT aircraft with four mixed-flow turbofan engines and a baseline mission to carry 305 passengers 5000 nautical miles at a cruise speed of Mach 2.4. The problem was solved by coupling NASA Lewis Research Center's design optimization testbed (COMETBOARDS) with NASA Langley Research Center's Flight Optimization System (FLOPS). The computing time expended in solving the problem was substantial, and the instability of the FLOPS analyzer at certain design points caused difficulties. In an attempt to alleviate both of these limitations, we explored the use of two approximation concepts in the design optimization process. The two concepts, which are based on neural network and linear regression approximation, provide the reanalysis capability and design sensitivity analysis information required for the optimization process. The HSCT aircraft optimization problem was solved by using three alternate approaches; that is, the original FLOPS analyzer and two approximate (derived) analyzers. The approximate analyzers were calibrated and used in three different ranges of the design variables; narrow (interpolated), standard, and wide (extrapolated).
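
    The approximation idea above can be sketched in a few lines: sample an expensive analysis, fit a cheap regression surrogate, and optimize on the surrogate. Everything below is a toy stand-in (the real problem coupled COMETBOARDS with the FLOPS analyzer, and also used neural networks); the quadratic test function, sample count, and grid search are illustrative assumptions.

```python
# Regression-surrogate sketch: fit a quadratic model to a sampled
# "expensive" analysis, then optimize on the cheap surrogate instead.
import numpy as np

def expensive_analysis(x):          # stand-in for a costly analyzer run
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.5) ** 2 + 2.0

# Sample the design space once (the "calibration" step).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 2))
y = np.array([expensive_analysis(x) for x in X])

# Full quadratic basis: 1, x1, x2, x1^2, x1*x2, x2^2.
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x1, x1 * x2, x2 * x2],
                    axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda x: features(np.asarray(x)) @ coef

# Optimize on the surrogate with a coarse grid search (cheap evaluations).
grid = np.array([[a, b] for a in np.linspace(-2, 2, 81)
                        for b in np.linspace(-2, 2, 81)])
best = grid[np.argmin(surrogate(grid))]     # near the true optimum (1.2, -0.5)
```

In the paper's terms, the surrogate provides the reanalysis capability and (through its analytic form) design sensitivities, while the expensive analyzer is only called during calibration.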

  17. Computing with dynamical systems based on insulator-metal-transition oscillators

    NASA Astrophysics Data System (ADS)

    Parihar, Abhinav; Shukla, Nikhil; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2017-04-01

    In this paper, we review recent work on novel computing paradigms using coupled oscillatory dynamical systems. We explore systems of relaxation oscillators based on linear state transitioning devices, which switch between two discrete states with hysteresis. By harnessing the dynamics of complex, connected systems, we embrace the philosophy of "let physics do the computing" and demonstrate how complex phase and frequency dynamics of such systems can be controlled, programmed, and observed to solve computationally hard problems. Although our discussion in this paper is limited to insulator-to-metallic state transition devices, the general philosophy of such computing paradigms can be translated to other mediums including optical systems. We present the mathematical treatments necessary to understand the time evolution of these systems and demonstrate through recent experimental results the potential of such computational primitives.

  18. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.
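
    The flavour of a direct interval solution can be sketched with a toy interval class and Gaussian elimination. The Interval class, the 3×3 band system, and the parameter range below are illustrative assumptions, not the paper's code (which also uses outward-rounded floating-point interval arithmetic).

```python
# Direct interval Gaussian elimination on a small band system with an
# uncertain diagonal parameter; the result intervals enclose the exact
# solution of every point system inside the coefficient intervals.
from itertools import product

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [a * b for a, b in product((self.lo, self.hi), (o.lo, o.hi))]
        return Interval(min(ps), max(ps))
    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0          # divisor must not contain zero
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

def solve_interval(A, b):
    """Gaussian elimination without pivoting, coefficients as intervals."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] = A[i][j] - m * A[k][j]
            b[i] = b[i] - m * b[k]
    x = [None] * n
    for i in reversed(range(n)):
        s = b[i]
        for j in range(i + 1, n):
            s = s - A[i][j] * x[j]
        x[i] = s / A[i][i]
    return x

I = Interval
# Tridiagonal (band) system whose diagonal carries an uncertain parameter.
A = [[I(3.9, 4.1), I(-1.0), I(0.0)],
     [I(-1.0), I(3.9, 4.1), I(-1.0)],
     [I(0.0), I(-1.0), I(3.9, 4.1)]]
b = [I(1.0), I(1.0), I(1.0)]
x = solve_interval(A, b)
```

Each `x[i]` is a narrow interval enclosing the solution of the midpoint (diagonal = 4) system.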

  19. The program LOPT for least-squares optimization of energy levels

    NASA Astrophysics Data System (ADS)

    Kramida, A. E.

    2011-02-01

    The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem.

    Program summary
    Program title: LOPT
    Catalogue identifier: AEHM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 19 254
    No. of bytes in distributed program, including test data, etc.: 427 839
    Distribution format: tar.gz
    Programming language: Perl v.5
    Computer: PC, Mac, Unix workstations
    Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX)
    RAM: 3 Mwords or more
    Word size: 32 or 64
    Classification: 2.2
    Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals.
    Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution.
    Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N and depends on the computer.
    Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision.
    Running time: 1 s for N=100, or 60 s for N=400 on a typical PC.
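
    The core least-squares problem LOPT solves can be sketched with a small synthetic example: build a design matrix of level differences, fix the ground level, and solve in the least-squares sense. The transitions and tolerances below are invented for illustration; LOPT itself adds weighting by measurement uncertainty, covariance-based Ritz uncertainties, and careful rounding.

```python
# Least-squares energy-level optimization from measured transition
# wavenumbers (fictitious data, unweighted for simplicity).
import numpy as np

# (upper level, lower level, measured wavenumber in cm^-1)
obs = [(1, 0, 1000.02), (2, 1, 499.97), (2, 0, 1500.05), (3, 1, 1200.00)]
n_levels = 4

# Each observation contributes one row: E_upper - E_lower = sigma.
A = np.zeros((len(obs), n_levels))
sigma = np.zeros(len(obs))
for row, (u, l, s) in enumerate(obs):
    A[row, u], A[row, l], sigma[row] = 1.0, -1.0, s

# Fix the ground level at zero: only energy differences are observed,
# so the full matrix is singular.
A = A[:, 1:]
E, *_ = np.linalg.lstsq(A, sigma, rcond=None)
levels = np.concatenate(([0.0], E))

# Ritz wavenumbers are differences of the optimized levels:
ritz_2_0 = levels[2] - levels[0]
```

The optimized levels reconcile the slightly inconsistent measurements (e.g. the 2-0 transition disagrees with the sum of 1-0 and 2-1 by 0.06 cm⁻¹).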

  20. A Python Program for Solving Schrödinger's Equation in Undergraduate Physical Chemistry

    ERIC Educational Resources Information Center

    Srnec, Matthew N.; Upadhyay, Shiv; Madura, Jeffry D.

    2017-01-01

    In undergraduate physical chemistry, Schrödinger's equation is solved for a variety of cases. In doing so, the energies and wave functions of the system can be interpreted to provide connections with the physical system being studied. Solving this equation by hand for a one-dimensional system is a manageable task, but it becomes time-consuming…
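
    A sketch in the spirit of the article: diagonalize a finite-difference Hamiltonian for the particle in a box, the simplest case such a course covers. Grid size, box length, and units are illustrative choices, not taken from the paper.

```python
# Numerical solution of the 1-D particle-in-a-box Schrodinger equation
# by diagonalizing a finite-difference Hamiltonian (atomic units,
# hbar = m = 1; V = 0 inside the box, hard walls at 0 and L).
import numpy as np

L, n = 1.0, 400                         # box length, interior grid points
dx = L / (n + 1)
# Hamiltonian: -(1/2) d^2/dx^2 via the 3-point finite-difference stencil.
H = (np.diag(np.full(n, 1.0 / dx**2))
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

E, psi = np.linalg.eigh(H)              # eigenvalues ascending, psi columns

# Exact particle-in-a-box levels for comparison: E_k = k^2 pi^2 / (2 L^2).
exact = np.array([k * k * np.pi**2 / (2.0 * L**2) for k in (1, 2, 3)])
```

The computed low-lying eigenvalues agree with the analytic levels to better than 0.1%, and `psi[:, k]` gives the corresponding wave functions on the grid.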

  1. Exactly solved models on planar graphs with vertices in ℤ³

    NASA Astrophysics Data System (ADS)

    Kels, Andrew P.

    2017-12-01

    It is shown how exactly solved edge interaction models on the square lattice may be extended onto more general planar graphs, with edges connecting a subset of next-nearest-neighbour vertices of ℤ³. This is done by using local deformations of the square lattice that arise through the use of the star-triangle relation. Similar to Baxter's Z-invariance property, these local deformations leave the partition function invariant up to some simple factors coming from the star-triangle relation. The deformations used here extend the usual formulation of Z-invariance by requiring the introduction of oriented rapidity lines which form directed closed paths in the rapidity graph of the model. The quasi-classical limit is also considered, in which case the deformations imply a classical Z-invariance property, as well as a related local closure relation, for the action functional of a system of classical discrete Laplace equations.

  2. Complete Sets of Radiating and Nonradiating Parts of a Source and Their Fields with Applications in Inverse Scattering Limited-Angle Problems

    PubMed Central

    Louis, A. K.

    2006-01-01

    Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. This paper provides for the two-dimensional case special sets of functions, which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data, known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060

  3. Clinical Neuropathology Views - 2/2016: Digital networking in European neuropathology: An initiative to facilitate truly interactive consultations.

    PubMed

    Idoate, Miguel A; García-Rojo, Marcial

    2016-01-01

    Digital technology is progressively changing our vision of the practice of neuropathology. A number of facts support the introduction of digital neuropathology. With the development of whole-slide imaging (WSI) systems, the difficulties involved in implementing a neuropathology network have been solved. A relevant difficulty has been image standardization, but an open digital image communication protocol defined by the Digital Imaging and Communications in Medicine (DICOM) standard is already a reality. The neuropathology network should be established in Europe because it is the expected geographic context for relationships among European neuropathologists. There are several limitations to the implementation of a digital neuropathology consultancy network, such as financial support, operational costs, legal issues, and technical assistance for clients. All of these items have been considered and should be solved before implementing the proposal. Finally, the authors conclude that a European digital neuropathology network should be created for patients' benefit.

  4. Clay Improvement with Burned Olive Waste Ash

    PubMed Central

    Mutman, Utkan

    2013-01-01

    Olive oil production is concentrated in the Mediterranean basin countries. Since the olive oil industry is blamed for a high quantity of pollution, it has become imperative to solve this problem by developing optimized systems for the treatment of olive oil wastes. This study proposes such a solution: burned olive waste ash is evaluated for use as a clay stabilizer. In the laboratory, the ash was used to improve bentonite clay. Beforehand, the olive waste was burned at 550°C in a high-temperature oven. The burned olive waste ash was added to bentonite clay in increments of 1% by weight, from 1% to 10%. The study consisted of the following tests on samples treated with burned olive waste ash: Atterberg Limits, Standard Proctor Density, and Unconfined Compressive Strength Tests. The test results show promise for this material to be used as a stabilizer and to solve many of the problems associated with its accumulation. PMID:23766671

  5. Solving Lauricella string scattering amplitudes through recurrence relations

    NASA Astrophysics Data System (ADS)

    Lai, Sheng-Hong; Lee, Jen-Chi; Lee, Taejin; Yang, Yi

    2017-09-01

    We show that there exist an infinite number of recurrence relations, valid for all energies, among the open bosonic string scattering amplitudes (SSA) of three tachyons and one arbitrary string state, the Lauricella SSA. Moreover, these recurrence relations can be used to solve all the Lauricella SSA and express them in terms of a single four-tachyon amplitude. These results extend the solvability of SSA discovered previously at the high-energy, fixed-angle scattering limit and at the Regge scattering limit to all kinematic regimes.

  6. Student Modeling Based on Problem Solving Times

    ERIC Educational Resources Information Center

    Pelánek, Radek; Jarušek, Petr

    2015-01-01

    Student modeling in intelligent tutoring systems is mostly concerned with modeling correctness of students' answers. As interactive problem solving activities become increasingly common in educational systems, it is useful to focus also on timing information associated with problem solving. We argue that the focus on timing is natural for certain…

  7. Continuous time random walk with local particle-particle interaction

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Jiang, Guancheng

    2018-05-01

    The continuous time random walk (CTRW) is often applied to the study of particle motion in disordered media. Yet most such applications do not allow for particle-particle (walker-walker) interaction. In this paper, we consider a CTRW with particle-particle interaction; for simplicity, we restrict the interaction to be local. The generalized Chapman-Kolmogorov equation is modified by introducing a perturbation function that fluctuates around 1, which models the effect of interaction. Subsequently, a time-fractional nonlinear advection-diffusion equation is derived from this walking system. Under the initial condition of condensed particles at the origin and the free-boundary condition, we numerically solve this equation with both attractive and repulsive particle-particle interactions. Moreover, a Monte Carlo simulation is devised to verify the results of the above numerical work. The equation and the simulation unanimously predict that this walking system converges to the conventional one in the long-time limit. However, for systems where the free-boundary condition and long-time limit are not simultaneously satisfied, this convergence does not hold.
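
    The baseline (non-interacting) CTRW that the paper perturbs can be sketched with a short Monte Carlo: heavy-tailed waiting times produce a mean-square displacement growing as t^alpha with alpha < 1, i.e. subdiffusion. The Pareto waiting-time law, jump rule, and parameters are illustrative assumptions.

```python
# Monte Carlo of independent CTRW walkers with Pareto(alpha) waiting
# times (alpha < 1: infinite mean wait) and unit symmetric jumps.
import math
import random

def msd(n_walkers, t_max, alpha=0.7, rng=None):
    """Mean-square displacement of the walker ensemble at time t_max."""
    rng = rng or random.Random(42)
    total = 0.0
    for _ in range(n_walkers):
        t, x = 0.0, 0
        while True:
            # Pareto waiting time >= 1 (inverse-CDF sampling).
            t += (1.0 - rng.random()) ** (-1.0 / alpha)
            if t > t_max:
                break
            x += rng.choice((-1, 1))
        total += x * x
    return total / n_walkers

rng = random.Random(42)
m1 = msd(4000, 100.0, rng=rng)
m2 = msd(4000, 1000.0, rng=rng)
# Effective growth exponent over one decade: close to alpha, well below
# the value 1 of ordinary diffusion.
exponent = math.log(m2 / m1) / math.log(10.0)
```

The paper's interacting walk modifies the jump statistics through a perturbation function; this sketch only shows the non-interacting reference the model converges to at long times.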

  8. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
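
    As a point of comparison for the PDF method, the same statistics can be estimated by brute-force Monte Carlo: simulate an ensemble of a damped frequency deviation driven by an Ornstein-Uhlenbeck (time-correlated) power input and histogram the results. The single-machine toy model and all parameters below are illustrative assumptions, not the paper's generator model.

```python
# Monte Carlo ensemble for a damped state omega driven by a
# time-correlated (OU) random power input p(t).
import math
import random

def simulate(n_paths=1000, t_end=10.0, dt=0.01, D=1.0,
             p_mean=1.0, p_sigma=0.3, tau=1.0, seed=7):
    rng = random.Random(seed)
    omegas = []
    for _ in range(n_paths):
        omega, p = 0.0, p_mean
        for _ in range(int(t_end / dt)):
            # OU input: stationary std p_sigma, correlation time tau.
            p += ((p_mean - p) * dt / tau
                  + p_sigma * math.sqrt(2.0 * dt / tau) * rng.gauss(0.0, 1.0))
            # Damped deviation driven by the power imbalance.
            omega += ((p - p_mean) - D * omega) * dt
        omegas.append(omega)
    return omegas

om = simulate()
mean = sum(om) / len(om)
# A histogram of `om` estimates the stationary PDF that the PDF method
# obtains instead by solving a deterministic PDE.
```

The Monte Carlo estimate is noisy and needs many paths; the appeal of the PDF method is that one deterministic PDE solve replaces this sampling.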

  9. Optimization of waste heat utilization in cold end system of thermal power station based on neural network algorithm

    NASA Astrophysics Data System (ADS)

    Du, Zenghui

    2018-04-01

    At present, flue gas waste heat utilization projects for coal-fired boilers are often limited by low-temperature corrosion problems and conventional PID control. The flue gas temperature cannot be reduced to the temperature at which wet desulphurization is most efficient, so heat recovery falls short of its maximum. Therefore, this paper analyzes and solves the remaining problems of the cold-end system of a thermal power station, providing solutions and theoretical support for energy saving, emission reduction, unit upgrading, and improvement of the comprehensive efficiency of the units.

  10. Symbiotic organisms search algorithm for dynamic economic dispatch with valve-point effects

    NASA Astrophysics Data System (ADS)

    Sonmez, Yusuf; Kahraman, H. Tolga; Dosoglu, M. Kenan; Guvenc, Ugur; Duman, Serhat

    2017-05-01

    In this study, the symbiotic organisms search (SOS) algorithm is proposed to solve the dynamic economic dispatch problem with valve-point effects, one of the most important problems of the modern power system. Practical constraints such as valve-point effects, ramp rate limits and prohibited operating zones have been considered in the solutions. The proposed algorithm was tested on five different test cases in 5-unit, 10-unit and 13-unit systems. The obtained results have been compared with other well-known metaheuristic methods reported before. Results show that the proposed algorithm has good convergence and produces better results than the other methods.
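
    The three SOS phases can be sketched compactly on a toy objective; the paper applies the algorithm to the much harder dynamic economic dispatch problem with its valve-point, ramp-rate, and prohibited-zone constraints. The sphere test function, population size, and iteration count below are illustrative assumptions.

```python
# Minimal symbiotic organisms search (SOS): mutualism, commensalism,
# and parasitism phases on a toy sphere objective.
import random

def sos(f, dim, lo, hi, pop_size=30, iters=150, seed=3):
    rng = random.Random(seed)
    clip = lambda v: [min(max(x, lo), hi) for x in v]
    others = lambda i: rng.choice([k for k in range(pop_size) if k != i])
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            best = pop[fit.index(min(fit))]
            # Mutualism: i and a partner j both benefit, moving toward best.
            j = others(i)
            mutual = [(a + b) / 2.0 for a, b in zip(pop[i], pop[j])]
            for idx in (i, j):
                bf = rng.randint(1, 2)                  # benefit factor
                cand = clip([x + rng.random() * (b - bf * m)
                             for x, b, m in zip(pop[idx], best, mutual)])
                if f(cand) < fit[idx]:
                    pop[idx], fit[idx] = cand, f(cand)
            # Commensalism: i benefits from j; j is unaffected.
            j = others(i)
            cand = clip([x + rng.uniform(-1, 1) * (b - y)
                         for x, b, y in zip(pop[i], best, pop[j])])
            if f(cand) < fit[i]:
                pop[i], fit[i] = cand, f(cand)
            # Parasitism: a mutated clone of i tries to displace a random j.
            j = others(i)
            para = pop[i][:]
            para[rng.randrange(dim)] = rng.uniform(lo, hi)
            if f(para) < fit[j]:
                pop[j], fit[j] = para, f(para)
    k = fit.index(min(fit))
    return pop[k], fit[k]

sphere = lambda v: sum(x * x for x in v)    # toy stand-in objective
x_best, f_best = sos(sphere, dim=5, lo=-10.0, hi=10.0)
```

A notable design point of SOS is that it has no algorithm-specific tuning parameters beyond population size and iteration count.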

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

    The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. The influence of individual sensor characteristics on the sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how the sensor-system performance can be understood in terms of contaminant transport processes and the model representation that is used to solve the inverse problem. The determination of release location and mass are shown to be related to and constrained by transport and mixing time scales. These time scales explain performance differences among different sensor networks. For example, the effect of longer sensor response times is comparably less for releases with longer mixing time scales.
    The third investigation explores how information fusion from heterogeneous sensors may improve the sensor-system performance and offset the need for more contaminant sensors. Physics- and algorithm-based frameworks are presented for selecting and fusing information from noncontaminant sensors. The frameworks are demonstrated with door-position sensors, which are found to be more useful in natural airflow conditions, but which cannot compensate for poor placement of contaminant sensors. The concepts and empirical findings have the potential to help in the design of sensor systems for more complex building systems. The research has broader relevance to additional environmental monitoring problems, fault detection and diagnostics, and system design.

  12. Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistence surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793

  13. Lindemann histograms as a new method to analyse nano-patterns and phases

    NASA Astrophysics Data System (ADS)

    Makey, Ghaith; Ilday, Serim; Tokel, Onur; Ibrahim, Muhamet; Yavuz, Ozgun; Pavlov, Ihor; Gulseren, Oguz; Ilday, Omer

    The detection, observation, and analysis of material phases and atomistic patterns are of great importance for understanding systems exhibiting both equilibrium and far-from-equilibrium dynamics. As such, there is intense research on phase transitions and pattern dynamics in soft matter, statistical and nonlinear physics, and polymer physics. To identify phases and nano-patterns, the pair correlation function is commonly used. However, this approach is limited in its ability to recognize competing patterns in dynamic systems, and it lacks visualisation capabilities. To overcome these limitations, we introduce Lindemann histogram quantification as an alternative method to analyse solid, liquid, and gas phases, along with hexagonal, square, and amorphous nano-pattern symmetries. We show that the proposed approach, based on a Lindemann parameter calculated per particle, maps local number densities to material phase or particle pattern. We apply the Lindemann histogram method to experimental data on dynamical colloidal self-assembly and identify competing patterns.
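
    A toy version of the per-particle Lindemann idea: the ratio of a particle's positional fluctuation over time to the mean neighbour spacing, histogrammed over all particles, separates solid-like from liquid-like populations. The synthetic trajectories below are illustrative assumptions, not the paper's colloidal data or its exact estimator.

```python
# Per-particle Lindemann parameter on synthetic 2-D trajectories:
# small vibrations about a site ("solid") vs a wandering site ("liquid").
import math
import random

def lindemann(traj, spacing):
    """RMS positional fluctuation of one trajectory / neighbour spacing."""
    n = len(traj)
    mx = sum(p[0] for p in traj) / n
    my = sum(p[1] for p in traj) / n
    var = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in traj) / n
    return math.sqrt(var) / spacing

rng = random.Random(0)
spacing = 1.0

# Solid-like particles: Gaussian jitter about a fixed lattice site.
solid = [[(rng.gauss(0.0, 0.05), rng.gauss(0.0, 0.05)) for _ in range(200)]
         for _ in range(50)]

# Liquid-like particles: the site itself diffuses (a random walk).
liquid = []
for _ in range(50):
    x = y = 0.0
    path = []
    for _ in range(200):
        x += rng.gauss(0.0, 0.05)
        y += rng.gauss(0.0, 0.05)
        path.append((x, y))
    liquid.append(path)

q_solid = [lindemann(t, spacing) for t in solid]
q_liquid = [lindemann(t, spacing) for t in liquid]
# A histogram of q_solid + q_liquid is bimodal: the two populations
# separate cleanly, which is the phase/pattern fingerprint.
```

The per-particle values can also be painted back onto particle positions, giving the spatial visualisation that the pair correlation function lacks.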

  14. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model.
    The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
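
    The core of Broyden's method is compact enough to sketch: replace the Jacobian with a secant-updated approximation so that only one function evaluation is needed per iteration. The toy 2×2 system, the one-time finite-difference seeding of B, and the tolerances are illustrative assumptions, not the report's limited-memory, large-scale implementation.

```python
# "Good Broyden" quasi-Newton iteration for a small nonlinear system.
import numpy as np

def fd_jacobian(F, x, h=1e-6):
    """One-time finite-difference Jacobian, used only to seed B."""
    n = len(x)
    Fx = F(x)
    J = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = h
        J[:, k] = (F(x + e) - Fx) / h
    return J

def broyden(F, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    B = fd_jacobian(F, x)               # initial Jacobian approximation
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)     # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        # Rank-1 secant update: afterwards B @ s equals F_new - Fx.
        B += np.outer((F_new - Fx) - B @ s, s) / (s @ s)
        x, Fx = x_new, F_new
    return x

# Toy 2x2 system: x^2 + y^2 = 4 intersected with e^x + y = 1.
def F(v):
    x, y = v
    return np.array([x * x + y * y - 4.0, np.exp(x) + y - 1.0])

root = broyden(F, [1.0, -1.0])
```

After the initial seeding, each iteration costs one function evaluation and one linear solve, with no further Jacobian evaluations, which is the property that makes the method attractive when Jacobians are unavailable or inaccurate.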

  15. Enroute flight planning: Evaluating design concepts for the development of cooperative problem-solving systems

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.

    1995-01-01

    There are many problem-solving tasks that are too complex to fully automate given the current state of technology. Nevertheless, significant improvements in overall system performance could result from the introduction of well-designed computer aids. We have been studying the development of cognitive tools for one such problem-solving task, enroute flight path planning for commercial airlines. Our goal has been two-fold. First, we have been developing specific system designs to help with this important practical problem. Second, we have been using this context to explore general design concepts to guide in the development of cooperative problem-solving systems. These design concepts are described below, along with illustrations of their application.

  16. Fully coupled simulation of cosmic reionization. I. numerical methods and tests

    DOE PAGES

    Norman, Michael L.; Reynolds, Daniel R.; So, Geoffrey C.; ...

    2015-01-09

    Here, we describe an extension of the Enzo code to enable fully coupled radiation hydrodynamical simulation of inhomogeneous reionization in large ∼(100 Mpc)³ cosmological volumes with thousands to millions of point sources. We solve all dynamical, radiative transfer, thermal, and ionization processes self-consistently on the same mesh, as opposed to a postprocessing approach which coarse-grains the radiative transfer. However, we employ a simple subgrid model for star formation, which we calibrate to observations. The numerical method presented is a modification of an earlier method presented in Reynolds et al., differing principally in the operator splitting algorithm we use to advance the system of equations. Radiation transport is done in the gray flux-limited diffusion (FLD) approximation, which is solved by implicit time integration split off from the gas energy and ionization equations, which are solved separately. This results in a faster and more robust scheme for cosmological applications compared to the earlier method. The FLD equation is solved using the hypre optimally scalable geometric multigrid solver from LLNL. By treating the ionizing radiation as a grid field as opposed to rays, our method is scalable with respect to the number of ionizing sources, limited only by the parallel scaling properties of the radiation solver. We test the speed and accuracy of our approach on a number of standard verification and validation tests. We show by direct comparison with Enzo's adaptive ray tracing method Moray that the well-known inability of FLD to cast a shadow behind opaque clouds has a minor effect on the evolution of ionized volume and mass fractions in a reionization simulation validation test. Finally, we illustrate an application of our method to the problem of inhomogeneous reionization in an 80 Mpc comoving box resolved with 3200³ Eulerian grid cells and dark matter particles.

  17. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a multi-storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of the building is large, then the result is a large scale system of second order ordinary differential equations. A large scale system is difficult to solve, and even when it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
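
    The comparison in the paper can be illustrated on a single-mass vibration problem, x'' = -(k/m) x, rewritten as a first-order system; the Heun (improved Euler) method is second-order accurate while the Euler method is only first-order. The stiffness ratio, step size, and time horizon below are illustrative assumptions.

```python
# Euler vs Heun on the undamped oscillator x'' = -(k/m) x,
# reduced to the first-order system y = [x, v].
import math

def step_euler(f, y, t, h):
    """One forward Euler step for y' = f(t, y)."""
    return [yi + h * fi for yi, fi in zip(y, f(t, y))]

def step_heun(f, y, t, h):
    """One Heun step: Euler predictor plus trapezoidal corrector."""
    k1 = f(t, y)
    y_pred = [yi + h * k for yi, k in zip(y, k1)]
    k2 = f(t + h, y_pred)
    return [yi + h * (a + b) / 2.0 for yi, a, b in zip(y, k1, k2)]

k_over_m = 4.0                                  # natural frequency omega = 2
f = lambda t, y: [y[1], -k_over_m * y[0]]       # y = [displacement, velocity]

def displacement_at(stepper, h=0.001, t_end=1.0):
    y, t = [1.0, 0.0], 0.0                      # released from rest at x = 1
    for _ in range(round(t_end / h)):
        y = stepper(f, y, t, h)
        t += h
    return y[0]

exact = math.cos(2.0 * 1.0)                     # x(t) = cos(omega * t)
err_euler = abs(displacement_at(step_euler) - exact)
err_heun = abs(displacement_at(step_heun) - exact)
```

For a building model the scalar state simply becomes a vector of floor displacements and velocities, with the same steppers applied componentwise.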

  18. Digitized adiabatic quantum computing with a superconducting circuit.

    PubMed

    Barends, R; Shabani, A; Lamata, L; Kelly, J; Mezzacapo, A; Las Heras, U; Babbush, R; Fowler, A G; Campbell, B; Chen, Yu; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Lucero, E; Megrant, A; Mutus, J Y; Neeley, M; Neill, C; O'Malley, P J J; Quintana, C; Roushan, P; Sank, D; Vainsencher, A; Wenner, J; White, T C; Solano, E; Neven, H; Martinis, John M

    2016-06-09

    Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable.
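    The digitized evolution described above can be mimicked numerically on a classical computer for a tiny instance. The following is a hedged sketch, not the paper's device-level protocol: two qubits, an invented Ising problem Hamiltonian, and a Trotterized linear schedule from the transverse-field driver to the problem Hamiltonian.

    ```python
    import numpy as np

    # Hedged toy (illustrative parameters, not from the paper): digitized
    # adiabatic evolution, H(s) = (1-s)*H0 + s*H1, applied as alternating
    # exponentials of H0 and H1 (first-order Trotterization).
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    H0 = -(np.kron(X, I2) + np.kron(I2, X))        # driver: ground state |++>
    H1 = -np.kron(Z, Z) - 0.5 * np.kron(Z, I2)     # Ising problem: ground state |00>

    def expm_h(H, t):
        """exp(-i t H) for Hermitian H via eigendecomposition."""
        w, V = np.linalg.eigh(H)
        return (V * np.exp(-1j * t * w)) @ V.conj().T

    psi = np.full(4, 0.5, dtype=complex)           # |++> ground state of H0
    steps, dt = 200, 0.1                           # total annealing time T = 20
    for j in range(1, steps + 1):
        s = j / steps
        psi = expm_h(H1, s * dt) @ (expm_h(H0, (1 - s) * dt) @ psi)

    fidelity = abs(psi[0]) ** 2                    # overlap with |00>
    ```

    With a slow enough schedule the state tracks the instantaneous ground state, so the final population concentrates on the solution state |00> of the problem Hamiltonian.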

  19. Closed-form Static Analysis with Inertia Relief and Displacement-Dependent Loads Using a MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1995-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is commonly performed throughout the aerospace industry. Many times, these problems are solved using static analysis with inertia relief. This solution technique allows for a free-free static analysis by balancing the applied loads with inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus displacement-dependent loads. Solving for the final displacements of such systems is commonly performed using iterative solution techniques. Unfortunately, these techniques can be time-consuming and labor-intensive. Since the coupled system equations for free-free systems with displacement-dependent loads can be written in closed-form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. Using a MSC/NASTRAN DMAP Alter, displacement-dependent loads have been included in static analysis with inertia relief. Such an Alter has been used successfully to solve efficiently a common aerospace problem typically solved using an iterative technique.
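    The inertia-relief balance described above can be illustrated on a toy model. This sketch is not the MSC/NASTRAN DMAP Alter itself; it uses an invented free-free three-mass spring chain to show how the applied load is balanced by rigid-body inertia loads so the singular static problem becomes consistent.

    ```python
    import numpy as np

    # Hedged sketch: static analysis with inertia relief on a free-free chain.
    ks = 100.0
    K = ks * np.array([[1.0, -1.0, 0.0],
                       [-1.0, 2.0, -1.0],
                       [0.0, -1.0, 1.0]])      # free-free stiffness (singular)
    M = np.diag([2.0, 1.0, 1.0])              # lumped masses
    R = np.ones((3, 1))                       # rigid-body (translation) mode
    F = np.array([1.0, 0.0, 0.0])             # applied static load

    # Rigid-body acceleration produced by the load, and the balanced load:
    # inertia loads -M*R*a are added so the net load excites no rigid motion.
    a = np.linalg.solve(R.T @ M @ R, R.T @ F)
    F_bal = F - (M @ R @ a).ravel()

    # F_bal is orthogonal to the rigid-body mode, so K u = F_bal is consistent;
    # the pseudo-inverse selects the solution with no rigid-body content.
    u = np.linalg.pinv(K) @ F_bal
    ```

    The two checks that matter are that the balanced load does no net work on the rigid-body mode and that the singular static equations are satisfied exactly.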

  20. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    NASA Technical Reports Server (NTRS)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation, a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible (windows and CCTVs) with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  1. Problems of Complex Systems: A Model of System Problem Solving Applied to Schools.

    ERIC Educational Resources Information Center

    Cooke, Robert A.; Rousseau, Denise M.

    Research of 25 Michigan elementary and secondary public schools is used to test a model relating organizations' problem-solving adequacy to their available inputs or resources and to the appropriateness of their structures. Problems that all organizations must solve, to avoid disorganization or entropy, include (1) getting inputs and producing…

  2. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem.
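    The k-WTA task itself can be illustrated with a much simpler classical construction than the paper's finite-time model. The sketch below, with invented inputs and gains, integrates a single scalar dual variable y until exactly k outputs saturate, which is the basic dual-network idea behind such k-WTA circuits.

    ```python
    import numpy as np

    # Hedged sketch (simplified dual network, not the paper's model): k-WTA by
    # driving the sum of saturated outputs toward k with one dual variable y.
    v = np.array([0.3, 0.9, 0.1, 0.7, 0.5])   # illustrative input signals
    k = 2                                     # number of winners
    eps = 0.01                                # sharpness of the activation

    def output(y):
        # Hard-sigmoid outputs: ~1 for inputs well above the threshold y.
        return np.clip((v - y) / eps, 0.0, 1.0)

    y, dt = 0.0, 1e-3
    for _ in range(10000):
        y += dt * (output(y).sum() - k)       # raise/lower the threshold

    winners = np.where(output(y) > 0.5)[0]    # indices of the k largest inputs
    ```

    At the fixed point the threshold y settles between the k-th and (k+1)-th largest inputs, so the winners are exactly the two largest entries of v.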

  3. When procedures discourage insight: epistemological consequences of prompting novice physics students to construct force diagrams

    NASA Astrophysics Data System (ADS)

    Kuo, Eric; Hallinen, Nicole R.; Conlin, Luke D.

    2017-05-01

    One aim of school science instruction is to help students become adaptive problem solvers. Though successful at structuring novice problem solving, step-by-step problem-solving frameworks may also constrain students' thinking. This study utilises a paradigm established by Heckler [(2010). Some consequences of prompting novice physics students to construct force diagrams. International Journal of Science Education, 32(14), 1829-1851] to test how cuing the first step in a standard framework affects undergraduate students' approaches and evaluation of solutions in physics problem solving. Specifically, prompting the construction of a standard diagram before problem solving increases the use of standard procedures, decreasing the use of a conceptual shortcut. Providing a diagram prompt also lowers students' ratings of informal approaches to similar problems. These results suggest that reminding students to follow typical problem-solving frameworks limits their views of what counts as good problem solving.

  4. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving

    PubMed Central

    Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.

    2016-01-01

    We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that families that are balanced, that is, characterized by high cohesion, flexibility, and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large-magnitude group effects for selected scales for youth and dyads, suggesting a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned are addressed. PMID:26936844

  5. Enhancement of problem solving ability of high school students through learning with real engagement in active problem solving (REAPS) model on the concept of heat transfer

    NASA Astrophysics Data System (ADS)

    Yulindar, A.; Setiawan, A.; Liliawati, W.

    2018-05-01

    This study examines the enhancement of students' problem solving ability before and after learning with the Real Engagement in Active Problem Solving (REAPS) model on the concept of heat transfer. The research method used is quantitative, with 35 high school students in Pontianak as the sample. Students' problem solving ability was measured through a test consisting of 3 description questions. The validity of the instrument was established by expert judgment and field testing, which yielded a validity value of 0.84. Based on the data analysis, the N-Gain value is 0.43, so the enhancement of students' problem solving ability is in the medium category. This result is attributed to students being less accurate in calculating their answers and having limited time to complete the questions given.

  6. Diffraction-limited storage-ring vacuum technology

    PubMed Central

    Al-Dmour, Eshraq; Ahlback, Jonny; Einfeld, Dieter; Tavares, Pedro Fernandes; Grabski, Marek

    2014-01-01

    Some of the characteristics of recent ultralow-emittance storage-ring designs and possibly future diffraction-limited storage rings are a compact lattice combined with small magnet apertures. Such requirements present a challenge for the design and performance of the vacuum system. The vacuum system should provide the required vacuum pressure for machine operation and be able to handle the heat load from synchrotron radiation. Small magnet apertures result in the conductance of the chamber being low, and lumped pumps are ineffective. One way to provide the required vacuum level is by distributed pumping, which can be realised by the use of a non-evaporable getter (NEG) coating of the chamber walls. It may not be possible to use crotch absorbers to absorb the heat from the synchrotron radiation because an antechamber is difficult to realise with such a compact lattice. To solve this, the chamber walls can work as distributed absorbers if they are made of a material with good thermal conductivity, and distributed cooling is used at the location where the synchrotron radiation hits the wall. The vacuum system of the 3 GeV storage ring of MAX IV is used as an example of possible solutions for vacuum technologies for diffraction-limited storage rings. PMID:25177979

  7. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, where the coefficients play a major role for multiple removal in the filter coefficient space. To solve the 2D predictive filter the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint of primaries. The FIST algorithm has been demonstrated as a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS based multichannel predictive deconvolution and the FIST based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
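    The FIST machinery used above is the standard fast iterative shrinkage-thresholding scheme for L1-regularized least squares. The following is a hedged, generic sketch (not the seismic implementation; the operator, sizes, and regularization weight are invented) of that algorithm on a synthetic sparse-recovery problem.

    ```python
    import numpy as np

    # Hedged sketch: FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 100))
    x_true = np.zeros(100)
    x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]        # sparse ground truth
    b = A @ x_true
    lam = 0.1

    def soft(z, t):
        # Soft-thresholding: the proximal operator of the L1 norm.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad
    x = np.zeros(100)
    y, t = x.copy(), 1.0
    for _ in range(500):
        x_new = soft(y - (A.T @ (A @ y - b)) / L, lam / L)   # prox-gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)        # momentum step
        x, t = x_new, t_new

    obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
    ```

    The momentum extrapolation is what distinguishes FISTA from plain ISTA and gives the faster convergence noted in the abstract; here it recovers the support of the sparse coefficient vector.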

  8. Word Problem Strategy for Latino English Language Learners at Risk for Math Disabilities

    ERIC Educational Resources Information Center

    Orosco, Michael J.

    2014-01-01

    "English Language Learners" (ELLs) at risk for "math disabilities" (MD) are challenged in solving word problems for numerous reasons such as (a) learning English as a second language, (b) limited experience using math vocabulary, and (c) lack of strategies to improve word-problem-solving skills. As a result of these…

  9. English Skills for Life Sciences: Problem Solving in Biology. Tutor Version [and] Student Version.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Center for Language Education and Research.

    This manual is part of a series of materials designed to reinforce essential concepts in physical science through interactive, language-sensitive, problem-solving exercises emphasizing cooperative learning. The materials are intended for limited-English-proficient (LEP) students in beginning physical science classes. The materials are for teams of…

  10. Deep Learning towards Expertise Development in a Visualization-Based Learning Environment

    ERIC Educational Resources Information Center

    Yuan, Bei; Wang, Minhong; Kushniruk, Andre W.; Peng, Jun

    2017-01-01

    With limited problem-solving capability and practical experience, novices have difficulties developing expert-like performance. It is important to make the complex problem-solving process visible to learners and provide them with necessary help throughout the process. This study explores the design and effects of a model-based learning approach…

  11. The role of retrieval practice in memory and analogical problem-solving.

    PubMed

    Hostetter, Autumn B; Penix, Elizabeth A; Norman, Mackenzie Z; Batsell, W Robert; Carr, Thomas H

    2018-05-01

    Retrieval practice (e.g., testing) has been shown to facilitate long-term retention of information. In two experiments, we examine whether retrieval practice also facilitates use of the practised information when it is needed to solve analogous problems. When retrieval practice was not limited to the information most relevant to the problems (Experiment 1), it improved memory for the information a week later compared with copying or rereading the information, although we found no evidence that it improved participants' ability to apply the information to the problems. In contrast, when retrieval practice was limited to only the information most relevant to the problems (Experiment 2), we found that retrieval practice enhanced memory for the critical information, the ability to identify the schematic similarities between the two sources of information, and the ability to apply that information to solve an analogous problem after a hint was given to do so. These results suggest that retrieval practice, through its effect on memory, can facilitate application of information to solve novel problems but has minimal effects on spontaneous realisation that the information is relevant.

  12. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.

    2018-03-01

    Problems arising in engineering, economics, modelling, industry, computing, and science are mostly nonlinear in nature, and the numerical solution of such systems is widely applied in those areas. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed still have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method based on the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. Performance is measured by the number of iterations and CPU time, with different initial starting points used on 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces CPU time and the number of iterations compared with its counterparts. A comparison between the proposed method and the classical DFP update found the proposed method to be the top performer, outperforming the existing method in almost all cases. In terms of the number of iterations, out of the 40 problems, the proposed method won on 38 (95%), while the classical DFP update won on 2 (5%). In terms of CPU time, the proposed method won on 29 of the 40 problems (72.5%), whereas the classical DFP update won on 11 (27.5%). The method is valid in its derivation, reliable in the number of iterations, and efficient in CPU time, and thus achieves its objective.
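    The core idea, stepping along -F(x) with a scalar in place of the inverse Hessian and a derivative-free line search on the squared-norm merit function, can be sketched as follows. This is a hedged illustration, not the authors' exact update: the test system and the simple backtracking rule are invented for the example.

    ```python
    import numpy as np

    # Hedged sketch: derivative-free iteration x <- x - theta*F(x) for a
    # symmetric nonlinear system F(x) = 0, with theta from backtracking on the
    # squared-norm merit function m(x) = 0.5*||F(x)||^2.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])

    def F(x):
        # Jacobian A + 3*diag(x^2) is symmetric, so the system is symmetric.
        return A @ x + x**3 - np.array([1.0, 2.0])

    def merit(x):
        return 0.5 * np.dot(F(x), F(x))

    x = np.zeros(2)
    for _ in range(500):
        fx = F(x)
        if np.linalg.norm(fx) < 1e-10:
            break
        theta = 1.0
        while merit(x - theta * fx) >= merit(x) and theta > 1e-12:
            theta *= 0.5                  # derivative-free backtracking
        x = x - theta * fx

    residual = np.linalg.norm(F(x))
    ```

    No Jacobian is ever formed or differenced; only evaluations of F enter the iteration, mirroring the derivative-free character of the method in the abstract.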

  13. Fluctuation-controlled front propagation

    NASA Astrophysics Data System (ADS)

    Ridgway, Douglas Thacher

    1997-09-01

    A number of fundamental pattern-forming systems are controlled by fluctuations at the front. These problems involve the interaction of an infinite dimensional probability distribution with a strongly nonlinear, spatially extended pattern-forming system. We have examined fluctuation-controlled growth in the context of the specific problems of diffusion-limited growth and biological evolution. Mean field theory of diffusion-limited growth exhibits a finite time singularity. Near the leading edge of a diffusion-limited front, this leads to acceleration and blowup. This may be resolved, in an ad hoc manner, by introducing a cutoff below which growth is weakened or eliminated (8). This model, referred to as the BLT model, captures a number of qualitative features of global pattern formation in diffusion-limited aggregation: contours of the mean field match contours of averaged particle density in simulation, and the modified mean field theory can form dendritic features not possible in the naive mean field theory. The morphology transition between dendritic and non-dendritic global patterns requires that BLT fronts have a Mullins-Sekerka instability of the wavefront shape, in order to form concave patterns. We compute the stability of BLT fronts numerically, and compare the results to fronts without a cutoff. A significant morphological instability of the BLT fronts exists, with a dominant wavenumber on the scale of the front width. For standard mean field fronts, no instability is found. The naive and ad hoc mean field theories are continuum-deterministic models intended to capture the behavior of a discrete stochastic system. A transformation which maps discrete systems into a continuum model with a singular multiplicative noise is known, however numerical simulations of the continuum stochastic system often give mean field behavior instead of the critical behavior of the discrete system. 
We have found a new interpretation of the singular noise, based on maintaining the symmetry of the absorbing state, but which is unsuccessful at capturing the behavior of diffusion-limited growth. In an effort to find a simpler model system, we turned to modelling fitness increases in evolution. The work was motivated by an experiment on vesicular stomatitis virus, a short (~9,600 bp) single-stranded RNA virus. A highly bottlenecked viral population increases in fitness rapidly until a certain point, after which the fitness increases at a slower rate. This is well modeled by a constant population reproducing and mutating on a smooth fitness landscape. Mean field theory of this system displays the same infinite propagation velocity blowup as mean field diffusion-limited aggregation. However, we have been able to make progress on a number of fronts. One is solving systems of moment equations, where a hierarchy of moments is truncated arbitrarily at some level. Good results for front propagation velocity are found with just two moments, corresponding to inclusion of the basic finite population clustering effect ignored by mean field theory. In addition, for small mutation rates, most of the population will be entirely on a single site or two adjacent sites, and the density of these cases can be described and solved. (Abstract shortened by UMI.)

  14. Upper Limits for Power Yield in Thermal, Chemical, and Electrochemical Systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, Stanislaw

    2010-03-01

    We consider modeling and power optimization of energy converters, such as thermal, solar and chemical engines and fuel cells. Thermodynamic principles lead to expressions for converter's efficiency and generated power. Efficiency equations serve to solve the problems of upgrading or downgrading a resource. Power yield is a cumulative effect in a system consisting of a resource, engines, and an infinite bath. While optimization of steady state systems requires using the differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. The primary result of static optimization is the upper limit of power, whereas that of dynamic optimization is a finite-rate counterpart of classical reversible work (exergy). The latter quantity depends on the end state coordinates and a dissipation index, h, which is the Hamiltonian of the problem of minimum entropy production. In reacting systems, an active part of chemical affinity constitutes a major component of the overall efficiency. The theory is also applied to fuel cells regarded as electrochemical flow engines. Enhanced bounds on power yield follow, which are stronger than those predicted by the reversible work potential.
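    A standard worked example consistent with the finite-rate bounds discussed above: a thermal engine operating between reservoirs T1 and T2 has the reversible (Carnot) efficiency 1 - T2/T1, while the efficiency at maximum power in the endoreversible (Chambadal-Novikov) model is the tighter 1 - sqrt(T2/T1).

    ```python
    import math

    # Illustrative companion calculation (not from the paper): reversible limit
    # versus the finite-rate efficiency at maximum power.
    def carnot(T1, T2):
        return 1.0 - T2 / T1             # reversible upper limit

    def chambadal_novikov(T1, T2):
        return 1.0 - math.sqrt(T2 / T1)  # efficiency at maximum power

    T1, T2 = 600.0, 300.0                # hot and cold reservoirs, K
    eta_c = carnot(T1, T2)               # 0.5
    eta_cn = chambadal_novikov(T1, T2)   # about 0.293
    ```

    The finite-rate efficiency is strictly below the Carnot value, which is the sense in which finite-rate bounds on power yield are "stronger" than those from the reversible work potential.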

  15. Review of water disinfection techniques

    NASA Technical Reports Server (NTRS)

    Colombo, Gerald V.; Sauer, Richard L.

    1987-01-01

    Throughout the history of manned space flight the supply of potable water to the astronauts has presented unique problems. Of particular concern has been the microbiological quality of the potable water. This has required the development of both preflight water system servicing procedures to disinfect the systems and inflight disinfectant addition and monitoring devices to ensure continuing microbiological control. The disinfectants successfully used to date have been aqueous chlorine or iodine. Because of special system limitations the use of iodine has been the most successful for inflight use and promises to be the agent most likely to be used in the future. Future spacecraft potable, hygiene, and experiment water systems will utilize recycled water. This will present special problems for water quality control. NASA is currently conducting research and development to solve these problems.

  16. Designing collective behavior in a termite-inspired robot construction team.

    PubMed

    Werfel, Justin; Petersen, Kirstin; Nagpal, Radhika

    2014-02-14

    Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general still less understood. We present a multi-agent construction system inspired by mound-building termites, solving such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals.

  17. A new prize system for drug innovation.

    PubMed

    Gandjour, Afschin; Chernyak, Nadja

    2011-10-01

    We propose a new prize (reward) system for drug innovation which pays a price based on the value of health benefits accrued over time. Willingness to pay for a unit of health benefit is determined based on the cost-effectiveness ratio of palliative/nursing care. We solve the problem of limited information on the value of health benefits by mathematically relating reward size to the uncertainty of information including information on potential drug overuse. The proposed prize system offers optimal incentives to invest in research and development because it rewards the innovator for the social value of drug innovation. The proposal is envisaged as a non-voluntary alternative to the current patent system and reduces excessive marketing of innovators and generic drug producers.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E., E-mail: bunakov@VB13190.spb.edu

    A critical analysis of the present-day concept of chaos in quantum systems as nothing but a “quantum signature” of chaos in classical mechanics is given. In contrast to the existing semi-intuitive guesses, a definition of classical and quantum chaos is proposed on the basis of the Liouville-Arnold theorem: a quantum chaotic system featuring N degrees of freedom should have M < N independent first integrals of motion (good quantum numbers) specified by the symmetry of the Hamiltonian of the system. Quantitative measures of quantum chaos that, in the classical limit, go over to the Lyapunov exponent and the classical stability parameter are proposed. The proposed criteria of quantum chaos are applied to solving standard problems of modern dynamical chaos theory.

  19. Bounded parametric control of plane motions of space tethered system

    NASA Astrophysics Data System (ADS)

    Bezglasnyi, S. P.; Mukhametzyanova, A. A.

    2018-05-01

    This paper is focused on the problem of controlling the plane motions of a space tethered system (STS). The STS is modeled as a heavy rod with two point masses fixed on the rod, while a third point mass can move along the rod. The control is realized as a continuous change of the distance from the centre of mass of the tethered system to the movable mass. New bounded control laws for the excitation and damping processes are constructed, and diametric reorientation and gravitational stabilization of the STS to the local vertical are obtained. The problem is solved by the method of Lyapunov functions from the classical theory of stability. The theoretical results are confirmed by numerical calculations.

  20. Krylov subspace methods - Theory, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1990-01-01

    Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations are discussed.
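    The projection idea can be sketched in a few lines. The following is a hedged textbook illustration (not code from the review, and the test matrix is invented): the Arnoldi process builds an orthonormal basis of the m-dimensional Krylov subspace, and the original N-dimensional system is replaced by a small (m+1) x m least-squares problem, which is the GMRES idea.

    ```python
    import numpy as np

    # Hedged sketch: Arnoldi projection + small least-squares solve (GMRES).
    rng = np.random.default_rng(1)
    n, m = 50, 30
    A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # well-conditioned test
    b = rng.standard_normal(n)

    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):                  # Arnoldi: orthonormal Krylov basis
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]

    # Solve the projected problem min ||beta*e1 - H y|| of size (m+1) x m.
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    x = Q[:, :m] @ y                    # approximate solution from the subspace
    residual = np.linalg.norm(A @ x - b)
    ```

    Only matrix-vector products with A are needed, which is why Krylov methods scale to problems where A can be applied but never formed.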

  1. Macroeconomic policy, growth, and biodiversity conservation.

    PubMed

    Lawn, Philip

    2008-12-01

    To successfully achieve biodiversity conservation, the amount of ecosystem structure available for economic production must be determined by, and subject to, conservation needs. As such, the scale of economic systems must remain within the limits imposed by the need to preserve critical ecosystems and the regenerative and waste assimilative capacities of the ecosphere. These limits are determined by biophysical criteria, yet macroeconomics involves the use of economic instruments designed to meet economic criteria that have no capacity to achieve biophysically based targets. Macroeconomic policy cannot, therefore, directly solve the biodiversity erosion crisis. Nevertheless, good macroeconomic policy is still important given that bad macroeconomic policy is likely to reduce human well-being and increase the likelihood of social upheaval that could undermine conservation efforts.

  2. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    NASA Astrophysics Data System (ADS)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and setup the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users pre- and post-analysis of structural and electrical properties of biomolecules.
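    The equation being solved can be illustrated by a toy far simpler than APBS: the linearized PB equation in one dimension, u'' = kappa^2 u, discretized with central differences and compared against its analytic solution. All parameters below are invented for the illustration.

    ```python
    import numpy as np

    # Hedged toy example: linearized 1D Poisson-Boltzmann with Dirichlet data
    # u(0) = 1, u(L) = 0; analytic solution sinh(kappa*(L-x)) / sinh(kappa*L).
    kappa, L, N = 1.0, 10.0, 201
    x = np.linspace(0.0, L, N)
    h = x[1] - x[0]

    # Interior equations: u[i-1] - (2 + (kappa*h)^2) u[i] + u[i+1] = 0.
    n = N - 2
    M = (np.diag(-2.0 - (kappa * h) ** 2 * np.ones(n))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    rhs = np.zeros(n)
    rhs[0] = -1.0                  # boundary value u(0) = 1 moved to the RHS

    u = np.ones(N)
    u[1:-1] = np.linalg.solve(M, rhs)
    u[-1] = 0.0

    exact = np.sinh(kappa * (L - x)) / np.sinh(kappa * L)
    err = np.max(np.abs(u - exact))
    ```

    The screened potential decays on the Debye length 1/kappa; the second-order finite-difference solution matches the analytic profile closely, which is the kind of validation a PB solver performs on simple geometries.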

  3. HFEM3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

    Software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when material properties, k, are distributed over a hierarchy of edges, facets and tetrahedra in the finite element mesh. The method is described in Weiss, CJ, Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v82, E155-167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage during program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is VTK-compliant for visualization and rendering by third-party software. The program uses dynamic memory allocation, and as such there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector is constrained by the addressable space of 32- versus 64-bit operating systems. Total working space required for the program is approximately 13*N double-precision words.
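    The element-by-element, matrix-free strategy the record describes can be sketched in one dimension. The toy below is illustrative only (HFEM3D itself is 3D and tetrahedral): it solves -u'' = f on (0,1) with linear elements, applying the stiffness matrix one element at a time inside a Jacobi-preconditioned conjugate-gradient loop, never assembling a global matrix.

```python
def apply_stiffness(u, h):
    """y = K u, accumulated element-by-element (no global matrix stored)."""
    n = len(u)
    y = [0.0] * n
    for e in range(n - 1):                 # loop over elements
        d = (u[e] - u[e + 1]) / h          # local (1/h)[[1,-1],[-1,1]] action
        y[e] += d
        y[e + 1] -= d
    y[0], y[-1] = u[0], u[-1]              # Dirichlet end rows act as identity
    return y

def pcg_poisson(b, h, tol=1e-12, maxit=500):
    """Jacobi-preconditioned conjugate gradients for K u = b."""
    n = len(b)
    diag = [2.0 / h] * n                   # interior diagonal of K
    diag[0] = diag[-1] = 1.0               # identity boundary rows
    x = [0.0] * n
    r = b[:]                               # r = b - K*0
    z = [ri / di for ri, di in zip(r, diag)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = apply_stiffness(p, h)
        denom = sum(pi * ai for pi, ai in zip(p, Ap))
        if denom == 0.0:
            break
        alpha = rz / denom
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [ri / di for ri, di in zip(r, diag)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

    For f ≡ 1 with zero ends, the consistent load vector is h at each interior node, and the nodal finite-element values reproduce u(x) = x(1-x)/2 exactly at the nodes, so midspan gives 0.125.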

  4. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    PubMed Central

    Vergara-Perez, Sandra; Marucho, Marcelo

    2015-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually take several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptative Poisson Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and setup the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users pre- and post- analysis of structural and electrical properties of biomolecules. PMID:26924848

  5. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    PubMed

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually take several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptative Poisson Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and setup the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users pre- and post- analysis of structural and electrical properties of biomolecules.

  6. Multiagent optimization system for solving the traveling salesman problem (TSP).

    PubMed

    Xie, Xiao-Feng; Liu, Jiming

    2009-04-01

    The multiagent optimization system (MAOS) is a nature-inspired method that supports cooperative search through the self-organization of a group of compact agents situated in an environment with certain shared public knowledge. Each agent in MAOS is an autonomous entity with personal declarative memory and behavioral components. In this paper, MAOS is refined for solving the traveling salesman problem (TSP), a classic computationally hard problem. Based on a simplified MAOS version, in which each agent operates on extremely limited declarative knowledge, some simple and efficient components for solving the TSP, including two improving heuristics based on a generalized edge assembly recombination, are implemented. Compared with metaheuristics in adaptive memory programming, MAOS is particularly suitable for supporting cooperative search. The experimental results on two TSP benchmark data sets show that MAOS is competitive with some state-of-the-art algorithms, including Lin-Kernighan-Helsgaun, IBGLK and PHGA, although MAOS does not use any explicit local search during the runtime. The contributions of MAOS components are investigated; they indicate that certain clues can usefully guide selections before time-consuming computation. More importantly, the results show that the cooperative search of agents can achieve overall good performance with a macro rule in switch mode, which deploys alternate search rules whose offline performances are negatively correlated. Using simple alternate rules may avoid the difficulty of seeking a single omnipotent rule that is efficient for a large data set.
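    MAOS's improving heuristics are built on a generalized edge assembly recombination; as a much simpler stand-in for what an "improving heuristic" does to a tour, the sketch below applies the classic 2-opt move (all code here is illustrative and not from MAOS).

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Reverse tour segments while any reversal shortens the closed tour."""
    tour = tour[:]
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # candidate second edge (tour[j], tour[j+1]); when i == 0,
            # skip j = n-1 because edge (n-1, 0) shares a node with edge (0, 1)
            for j in range(i + 2, n - 1 if i == 0 else n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:          # uncross the two edges
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

    On the four corners of a unit square, the crossing tour 0-1-2-3 (length 2 + 2√2) is uncrossed to the perimeter tour of length 4.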

  7. Relationship between Systems Coaching and Problem-Solving Implementation Fidelity in a Response-to-Intervention Model

    ERIC Educational Resources Information Center

    March, Amanda L.; Castillo, Jose M.; Batsche, George M.; Kincaid, Donald

    2016-01-01

    The literature on RTI has indicated that professional development and coaching are critical to facilitating problem-solving implementation with fidelity. This study examined the extent to which systems coaching related to the fidelity of problem-solving implementation in 31 schools from six districts. Schools participated in three years of a…

  8. A Case Study in an Integrated Development and Problem Solving Environment

    ERIC Educational Resources Information Center

    Deek, Fadi P.; McHugh, James A.

    2003-01-01

    This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…

  9. Models of human problem solving - Detection, diagnosis, and compensation for system failures

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1983-01-01

    The role of the human operator as a problem solver in man-machine systems such as vehicles, process plants, transportation networks, etc. is considered. Problem solving is discussed in terms of detection, diagnosis, and compensation. A wide variety of models of these phases of problem solving are reviewed and specifications for an overall model outlined.

  10. Using a Recommendation System to Support Problem Solving and Case-Based Reasoning Retrieval

    ERIC Educational Resources Information Center

    Tawfik, Andrew A.; Alhoori, Hamed; Keene, Charles Wayne; Bailey, Christian; Hogan, Maureen

    2018-01-01

    In case library learning environments, learners are presented with an array of narratives that can be used to guide their problem solving. However, according to theorists, learners struggle to identify and retrieve the optimal case to solve a new problem. Given the challenges novices face during case retrieval, recommender systems can be embedded…

  11. A Mixed Integer Efficient Global Optimization Framework: Applied to the Simultaneous Aircraft Design, Airline Allocation and Revenue Management Problem

    NASA Astrophysics Data System (ADS)

    Roy, Satadru

    Traditional approaches to designing and optimizing a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that can simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, a class that is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization.
    This approach then demonstrates its ability to solve an 11-route airline network problem consisting of 94 decision variables, including 33 integer and 61 continuous variables. This application problem represents an interacting group of systems and poses key challenges to the optimization framework in solving the MINLP problem, as reflected by the presence of a moderate number of integer and continuous design variables and an expensive analysis tool. The result indicates that simultaneously solving the subspaces can lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore the performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems, including those with categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.

  12. The efficacy of problem solving therapy to reduce post stroke emotional distress in younger (18-65) stroke survivors.

    PubMed

    Chalmers, Charlotte; Leathem, Janet; Bennett, Simon; McNaughton, Harry; Mahawish, Karim

    2017-11-26

    To investigate the efficacy of problem solving therapy for reducing the emotional distress experienced by younger stroke survivors, a non-randomized waitlist-controlled design was used to compare outcome measures for the treatment group and a waitlist control group at baseline and post-waitlist/post-therapy. After the waitlist group received problem solving therapy, an analysis was completed on the pooled outcome measures at baseline, post-treatment, and three-month follow-up. Changes on outcome measures between baseline and post-treatment were not significantly different between the treatment group (n = 13) and the waitlist control group (n = 16) (between-subject design). The pooled data (n = 28) indicated that receiving problem solving therapy significantly reduced participants' levels of depression and anxiety and increased quality of life from baseline to follow-up (within-subject design); however, methodological limitations, such as the lack of a control group, reduce the validity of this finding. The between-subject results suggest that there was no significant difference between those who received problem solving therapy and a waitlist control group between baseline and post-waitlist/post-therapy. The within-subject design suggests that problem solving therapy may be beneficial for younger stroke survivors when they are given some time to learn and implement the skills into their day-to-day lives. However, additional research with a control group is required to investigate this further. This study provides limited evidence for the provision of support groups for younger stroke survivors post stroke; however, it remains unclear what type of support this should be. Implications for Rehabilitation: Problem solving therapy is no more effective for reducing post-stroke distress than a waitlist control group. Problem solving therapy may be perceived as helpful and enjoyable by younger stroke survivors. Younger stroke survivors may use the skills learnt from problem solving therapy to solve problems in their day-to-day lives. Younger stroke survivors may benefit from age-appropriate psychological support; however, future research is needed to determine what type of support this should be.

  13. A two-dimensional composite grid numerical model based on the reduced system for oceanography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Y.F.; Browning, G.L.; Chesshire, G.

    The proper mathematical limit of a hyperbolic system with multiple time scales, the reduced system, is a system that contains no high-frequency motions and is well posed if suitable boundary conditions are chosen for the initial-boundary value problem. The composite grid method, a robust and efficient grid-generation technique that smoothly and accurately treats general irregular boundaries, is used to approximate the two-dimensional version of the reduced system for oceanography on irregular ocean basins. A change-of-variable technique that substantially increases the accuracy of the model and a method for efficiently solving the elliptic equation for the geopotential are discussed. Numerical results are presented for circular and kidney-shaped basins by using a set of analytic solutions constructed in this paper.

  14. Analysis of the stability of nonlinear suspension system with slow-varying sprung mass under dual-excitation

    NASA Astrophysics Data System (ADS)

    Yao, Jun; Zhang, Jinqiu; Zhao, Mingmei; Li, Xin

    2018-07-01

    This study investigated the stability of vibration in a nonlinear suspension system with slow-varying sprung mass under dual excitation. A mathematical model of the system was first established and then solved using the multi-scale method. Finally, the amplitude-frequency curve of vehicle vibration, the solution's stable region and the time-domain curve in Hopf bifurcation were derived. The results revealed that an increase in the lower excitation reduces the system's stability, while an increase in the upper excitation can make the system more stable. The slow-varying sprung mass changes the system's damping from negative to positive, leading to the appearance of a limit cycle and a Hopf bifurcation; as a result, the vehicle's vibration state is forced to change. The stability of this system is extremely fragile under the effect of dynamic Hopf bifurcation as well as static bifurcation.

  15. Numerical Limitations of 1D Hydraulic Models Using MIKE11 or HEC-RAS software - Case study of Baraolt River, Romania

    NASA Astrophysics Data System (ADS)

    Andrei, Armas; Robert, Beilicci; Erika, Beilicci

    2017-10-01

    MIKE 11 is an advanced hydroinformatic tool, a professional engineering software package for the simulation of one-dimensional flows in estuaries, rivers, irrigation systems, channels and other water bodies. MIKE 11 is a 1-dimensional river model developed by DHI Water · Environment · Health, Denmark. The basic computational procedure of HEC-RAS for steady flow is based on the solution of the one-dimensional energy equation, with energy losses evaluated by friction and contraction/expansion. The momentum equation may be used in situations where the water surface profile is rapidly varied; these include hydraulic jumps, hydraulics of bridges, and evaluating profiles at river confluences. For unsteady flow, HEC-RAS solves the full, dynamic, 1-D Saint Venant equations using an implicit finite difference method; the unsteady flow equation solver was adapted from Dr. Robert L. Barkau's UNET package. Fluid motion is governed by the basic principles of conservation of mass, energy and momentum, which form the basis of fluid mechanics and hydraulic engineering. Complex flow situations must be solved using empirical approximations and numerical models, which are based on derivations of the basic principles (backwater equation, Navier-Stokes equations, etc.). Real-life situations are frequently too complex to solve without the aid of numerical models, yet there is a tendency among some engineers to discard the basic principles taught at university and blindly assume that the results produced by the model are correct. Regardless of the complexity of models, and despite the claims of their developers, all numerical models are required to make approximations, and consequently all have their limitations. These may be related to geometric limitations, numerical simplification, or the use of empirical correlations. Some are obvious: one-dimensional models must average properties over the two remaining directions. It is the less obvious and poorly advertised approximations that pose the greatest threat to the novice user. Some of these, such as the inability of one-dimensional unsteady models to simulate supercritical flow, can cause significant inaccuracy in the model predictions.
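    The 1D steady-flow machinery such packages implement can be reduced to a toy gradually-varied-flow integration. The sketch below is illustrative only and far simpler than MIKE 11 or HEC-RAS: it Euler-marches the water surface upstream in a wide rectangular channel with Manning friction, using dy/dx = (S0 - Sf)/(1 - Fr²).

```python
def gvf_upstream(y_start, q=2.0, n_manning=0.03, s0=0.001,
                 g=9.81, dx=5.0, reach=20000.0):
    """March an M1 backwater profile upstream from a downstream control depth.

    q is discharge per unit width (m^2/s); the wide-channel approximation
    takes hydraulic radius R ~ depth y, so Sf = (n*q)^2 / y^(10/3).
    Returns the depth after marching `reach` metres upstream.
    """
    y = y_start
    for _ in range(int(reach / dx)):
        sf = (n_manning * q) ** 2 / y ** (10.0 / 3.0)   # friction slope
        fr2 = q * q / (g * y ** 3)                      # Froude number squared
        y -= dx * (s0 - sf) / (1.0 - fr2)               # minus: upstream march
    return y
```

    Marched far enough upstream, the depth relaxes to the normal depth yn = (n·q/√S0)^(3/5); for the defaults above, yn ≈ 1.47 m, so a downstream control of 2 m decays toward that value.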

  16. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and the creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined, together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.

  17. A method for the automated construction of the joint system of equations to solve the problem of the flow distribution in hydraulic networks

    NASA Astrophysics Data System (ADS)

    Novikov, A. E.

    1993-10-01

    There are several methods for solving the problem of the flow distribution in hydraulic networks, but none of them provides mathematical tools for forming the joint systems of equations needed to solve it. This paper suggests a method for constructing joint systems of equations to calculate hydraulic circuits of arbitrary form. A graph representation in the sense of Kirchhoff is introduced.
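    The kind of joint system such a method assembles can be seen in miniature on the smallest possible network. As a hypothetical illustration (not the paper's construction), the classic Hardy Cross loop correction below balances two parallel pipes carrying a total flow Q: Kirchhoff's node law holds by construction, and the loop (head-loss) law is iterated to convergence.

```python
def hardy_cross_two_pipes(q_total, r1, r2, iters=50):
    """Split q_total between two parallel pipes with head loss h = r*Q*|Q|.

    The node law (q1 + q2 = q_total) is enforced by construction; the loop
    law (equal head loss in both pipes) is driven to zero by the Hardy
    Cross correction dq = -h / (dh/dq).
    """
    q1 = q_total / 2.0
    q2 = q_total - q1
    for _ in range(iters):
        # signed head-loss imbalance around the loop: pipe 1 traversed with
        # the loop direction, pipe 2 against it
        h = r1 * q1 * abs(q1) - r2 * q2 * abs(q2)
        dh = 2.0 * (r1 * abs(q1) + r2 * abs(q2))
        dq = -h / dh
        q1 += dq
        q2 -= dq
    return q1, q2
```

    The analytic split is q1 = Q·√r2/(√r1 + √r2); with r1 = 4·r2 that gives q1 = Q/3, which the iteration reproduces to machine precision.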

  18. Social Problem Solving and Depressive Symptoms Over Time: A Randomized Clinical Trial of Cognitive Behavioral Analysis System of Psychotherapy, Brief Supportive Psychotherapy, and Pharmacotherapy

    PubMed Central

    Klein, Daniel N.; Leon, Andrew C.; Li, Chunshan; D’Zurilla, Thomas J.; Black, Sarah R.; Vivian, Dina; Dowling, Frank; Arnow, Bruce A.; Manber, Rachel; Markowitz, John C.; Kocsis, James H.

    2011-01-01

    Objective Depression is associated with poor social problem-solving, and psychotherapies that focus on problem-solving skills are efficacious in treating depression. We examined the associations between treatment, social problem solving, and depression in a randomized clinical trial testing the efficacy of psychotherapy augmentation for chronically depressed patients who failed to fully respond to an initial trial of pharmacotherapy (Kocsis et al., 2009). Method Participants with chronic depression (n = 491) received Cognitive Behavioral Analysis System of Psychotherapy (CBASP), which emphasizes interpersonal problem-solving, plus medication; Brief Supportive Psychotherapy (BSP) plus medication; or medication alone for 12 weeks. Results CBASP plus pharmacotherapy was associated with significantly greater improvement in social problem solving than BSP plus pharmacotherapy, and a trend for greater improvement in problem solving than pharmacotherapy alone. In addition, change in social problem solving predicted subsequent change in depressive symptoms over time. However, the magnitude of the associations between changes in social problem solving and subsequent depressive symptoms did not differ across treatment conditions. Conclusions It does not appear that improved social problem solving is a mechanism that uniquely distinguishes CBASP from other treatment approaches. PMID:21500885

  19. Exploring a Structure for Mathematics Lessons That Foster Problem Solving and Reasoning

    ERIC Educational Resources Information Center

    Sullivan, Peter; Walker, Nadia; Borcek, Chris; Rennie, Mick

    2015-01-01

    While there is widespread agreement on the importance of incorporating problem solving and reasoning into mathematics classrooms, there is limited specific advice on how this can best happen. This is a report of an aspect of a project that is examining the opportunities and constraints in initiating learning by posing challenging mathematics tasks…

  20. Examining the Role of Web 2.0 Tools in Supporting Problem Solving during Case-Based Instruction

    ERIC Educational Resources Information Center

    Koehler, Adrie A.; Newby, Timothy J.; Ertmer, Peggy A.

    2017-01-01

    As learners solve complex problems, such as the ones present in case narratives, they need instructional support. Potentially, Web 2.0 applications can be useful to learners during case-based instruction (CBI), as their affordances offer creative and collaborative opportunities. However, there is limited research available on how the affordances…

  1. A Test of the Circumvention-of-Limits Hypothesis in Scientific Problem Solving: The Case of Geological Bedrock Mapping

    ERIC Educational Resources Information Center

    Hambrick, David Z.; Libarkin, Julie C.; Petcovic, Heather L.; Baker, Kathleen M.; Elkins, Joe; Callahan, Caitlin N.; Turner, Sheldon P.; Rench, Tara A.; LaDue, Nicole D.

    2012-01-01

    Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco…

  2. Problems in Staff and Educational Development Leadership: Solving, Framing, and Avoiding

    ERIC Educational Resources Information Center

    Blackmore, Paul; Wilson, Andrew

    2005-01-01

    Analysis of interviews using critical incident technique with a sample of leaders in staff and educational development in higher education institutions reveals a limited use of classical problem-solving approaches. However, many leaders are able to articulate ways in which they frame problems. Framing has to do with goals, which may be complex,…

  3. Exploring the Cognitive Demand and Features of Problem Solving Tasks in Primary Mathematics Classrooms

    ERIC Educational Resources Information Center

    McCormick, Melody

    2016-01-01

    Student learning is greatest in classrooms where students engage in problem solving tasks that are cognitively demanding. However, there are growing concerns that many Australian students are given limited opportunities to engage in these types of tasks. 108 upper primary school teachers were surveyed to examine task features and cognitive demand…

  4. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools that help power market participants reduce price risks associated with transmission congestion. FTRs are issued through a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having reformulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers such as CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
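    The underlying idea of recasting a constrained optimization as a dynamical system whose steady state is the optimum can be shown on a tiny quadratic program. The sketch below is purely illustrative (the paper's NDS formulation and the FTR welfare objective are far larger): an Euler-discretized projected gradient flow settles at the constrained minimizer.

```python
def projected_gradient_flow(dt=0.05, steps=500):
    """Minimize f(x1,x2) = (x1-3)^2 + (x2-2)^2 s.t. x1 + x2 <= 4, x >= 0.

    Euler-integrate the descent flow x' = -grad f, projecting each state
    back onto the feasible set; the trajectory settles at the constrained
    optimum (2.5, 1.5), the projection of (3, 2) onto x1 + x2 = 4.
    """
    x1 = x2 = 0.0
    for _ in range(steps):
        x1 -= dt * 2.0 * (x1 - 3.0)       # descent flow on f
        x2 -= dt * 2.0 * (x2 - 2.0)
        excess = x1 + x2 - 4.0            # project onto x1 + x2 <= 4
        if excess > 0.0:
            x1 -= excess / 2.0
            x2 -= excess / 2.0
        x1 = max(x1, 0.0)                 # project onto x >= 0
        x2 = max(x2, 0.0)
    return x1, x2
```

    The fixed point of the discretized dynamics is exactly the KKT point of the small QP, which is the sense in which a dynamic-system solver "solves" the optimization.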

  5. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  6. [The male factor in childless marriage problem-solving strategies].

    PubMed

    Bozhedomov, V A

    2016-03-01

    This paper proposes a three-level care system for men from involuntarily childless couples. The proposal is based on the experience of federal and regional clinics of urology and gynecology and the respective departments for postgraduate education, and on an analysis of the scientific literature. The use of three-stage comprehensive prevention of the male infertility factor and recurrent pregnancy loss is substantiated. Up-to-date requirements for equipping andrology laboratories and testing sperm quality are outlined. The prospects and limitations of surgical and medical treatment modalities and assisted reproductive technologies are described.

  7. Environmental protection in Italy: the emerging concept of a right to a healthful environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patti, S.

    1984-07-01

    Italy's concepts of private law limit the possibilities for environmental protection. The failure to use available public law effectively, and the failure of other governments to solve the problem with constitutional changes, emphasize the need to establish an effective legal means within the existing constitutional structure. A recent approach draws on the right of the individual to a healthful environment, but whether this succeeds in protecting the environment depends, to a large degree, on the ability of Italians to overcome a system characterized by economic individualism. 40 references.

  8. Constrained optimization of sequentially generated entangled multiqubit states

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique

    2009-08-01

    We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.

  9. Studies of hypokinesia in animals to solve urgent problems of space biology and medicine

    NASA Technical Reports Server (NTRS)

    Baranski, S.; Bodya, K.; Reklevska, V.; Tomashevska, L.; Gayevskaya, M. S.; Ilina-Kakuyeva, Y. I.; Katsyuba-Ustiko, G.; Kovalenko, Y. A.; Kurkina, L. M.; Mailyan, E. S.

    1974-01-01

    The effects of hypokinesia on animals were studied by observing: (1) the hormonal and mediator balance of the body; (2) gas exchange and tissue respiration; (3) protein content in skeletal muscles; (4) structure of skeletal muscles; and (5) function of skeletal muscles. Sharp limitation of motor activity causes interconnected processes of a dystrophic and pathological character, expressed as a reduction in the force of various muscle groups with disturbance of velocity properties and motor coordination due to disturbances in the control link of the neuromuscular system.

  10. Fine structure of spectrum of twist-three operators in QCD

    NASA Astrophysics Data System (ADS)

    Belitsky, A. V.

    1999-04-01

    We unravel the structure of the spectrum of the anomalous dimensions of the quark-gluon twist-3 operators which are responsible for the multiparton correlations in hadrons and enter as a leading contribution to several physical cross sections. The method of analysis is based on the recent finding of a non-trivial integral of motion for the corresponding Hamiltonian problem in the multicolour limit, which results in the exact integrability of the three-particle system. A quasiclassical expansion is used for solving the problem. We address the chiral-odd sector as a case study.

  11. The P.A.C.E.S. Grading Rubric: Creating a Student-Owned Assessment Tool for Projects-The Design Brief Brings out All Kinds of "Out of the Box" Thinking, with Many Correct Answers to Solve the Problem

    ERIC Educational Resources Information Center

    Tufte, Robert B., Jr.

    2005-01-01

    P.A.C.E.S. stands for Participation, Appearance, Cleanup, Engineering, and Safety. The author has traditionally used design briefs to set the limits on processes and materials to solve a given problem. The design brief brings out all kinds of "out of the box" thinking, with many correct answers to solve the problem. The P.A.C.E.S. rubric ties the…

  12. The Future of Electronic Device Design: Device and Process Simulation Find Intelligence on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.

    1999-01-01

    We are on the path to meet the major challenges ahead for TCAD (technology computer aided design). The emerging computational grid will ultimately solve the challenge of limited computational power. The Modular TCAD Framework will solve the TCAD software challenge once TCAD software developers realize that there is no other way to meet industry's needs. The modular TCAD framework (MTF) also provides the ideal platform for solving the TCAD model challenge by rapid implementation of models in a partial differential solver.

  13. Tribological advancements for reliable wind turbine performance.

    PubMed

    Kotzalas, Michael N; Doll, Gary L

    2010-10-28

    Wind turbines have had various limitations to their mechanical system reliability owing to tribological problems over the past few decades. While several studies show that turbines are becoming more reliable, it is still not at an overall acceptable level to the operators based on their current business models. Data show that the electrical components are the most problematic; however, the parts are small, thus easy and inexpensive to replace in the nacelle, on top of the tower. It is the tribological issues that receive the most attention as they have higher costs associated with repair or replacement. These include the blade pitch systems, nacelle yaw systems, main shaft bearings, gearboxes and generator bearings, which are the focus of this review paper. The major tribological issues in wind turbines and the technological developments to understand and solve them are discussed within. The study starts with an overview of fretting corrosion, rolling contact fatigue, and frictional torque of the blade pitch and nacelle yaw bearings, and references to some of the recent design approaches applied to solve them. Also included is a brief overview into lubricant contamination issues in the gearbox and electric current discharge or arcing damage of the generator bearings. The primary focus of this review is the detailed examination of main shaft spherical roller bearing micropitting and gearbox bearing scuffing, micropitting and the newer phenomenon of white-etch area flaking. The main shaft and gearbox are integrally related and are the most commonly referred to items involving expensive repair costs and downtime. As such, the latest research and developments related to the cause of the wear and damage modes and the technologies used or proposed to solve them are presented.

  14. Dithering Digital Ripple Correlation Control for Photovoltaic Maximum Power Point Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barth, C; Pilawa-Podgurski, RCN

    This study demonstrates a new method for rapid and precise maximum power point tracking in photovoltaic (PV) applications using dithered PWM control. Constraints imposed by efficiency, cost, and component size limit the available PWM resolution of a power converter, and may in turn limit the MPP tracking efficiency of the PV system. In these scenarios, PWM dithering can be used to improve average PWM resolution. In this study, we present a control technique that uses ripple correlation control (RCC) on the dithering ripple, thereby achieving simultaneous fast tracking speed and high tracking accuracy. Moreover, the proposed method solves some of the practical challenges that have to date limited the effectiveness of RCC in solar PV applications. We present a theoretical derivation of the principles behind dithering digital ripple correlation control, as well as experimental results that show excellent tracking speed and accuracy with basic hardware requirements.

  15. Are there signature limits in early theory of mind?

    PubMed

    Fizke, Ella; Butterfill, Stephen; van de Loo, Lea; Reindl, Eva; Rakoczy, Hannes

    2017-10-01

    Current theory-of-mind research faces the challenge of reconciling two sets of seemingly incompatible findings: Whereas children come to solve explicit verbal false belief (FB) tasks from around 4 years of age, recent studies with various less explicit measures such as looking time, anticipatory looking, and spontaneous behavior suggest that even infants can succeed on some FB tasks. In response to this tension, two-systems theories propose to distinguish between an early-developing system, tracking simple forms of mental states, and a later-developing system, based on fully developed concepts of belief and other propositional attitudes. One prediction of such theories is that the early-developing system has signature limits concerning aspectuality. We tested this prediction in two experiments. The first experiment showed (in line with previous findings) that 2- and 3-year-olds take into account a protagonist's true or false belief about the location of an object in their active helping behavior. In contrast, toddlers' helping behavior did not differentiate between true and false belief conditions when the protagonist's belief essentially involved aspectuality. Experiment 2 replicated these findings with a more stringent method designed to rule out more parsimonious explanations. Taken together, the current findings are compatible with the possibility that early theory-of-mind reasoning is subject to signature limits as predicted by the two-systems account. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. A Flexible CUDA LU-based Solver for Small, Batched Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste

    This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster under certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers that need batched linear solvers to choose whichever implementation is more appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that choose the best implementation depending on the input data.
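
    As an illustration of the per-matrix kernel idea (a sketch only, not the chapter's CUDA code), LU factorization with complete pivoting for a single small dense system can be written as follows; a batched GPU solver would run one such factorization per thread:

```python
# Illustrative sketch (not the paper's CUDA code): LU elimination with
# complete pivoting for one small dense system -- the per-thread kernel
# a batched GPU solver would run once per matrix.

def solve_complete_pivoting(A, b):
    """Solve A x = b via elimination with complete pivoting (row and
    column interchanges), trading extra pivot-search cost for the better
    numerical accuracy mentioned in the abstract."""
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    col_perm = list(range(n))          # tracks column swaps to unscramble x
    for k in range(n):
        # Complete pivoting: search the whole trailing submatrix for the
        # entry of largest magnitude.
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for row in A:
            row[k], row[q] = row[q], row[k]
        col_perm[k], col_perm[q] = col_perm[q], col_perm[k]
        # Eliminate below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution, then undo the column permutation.
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))
        y[i] = s / A[i][i]
    x = [0.0] * n
    for k, orig in enumerate(col_perm):
        x[orig] = y[k]
    return x

x = solve_complete_pivoting([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
# x ≈ [0.8, 1.4]
```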

  17. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.

  18. Experimental quantum computing to solve systems of linear equations.

    PubMed

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
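
    For comparison, a classical sketch (hypothetical, not the experimental circuit) of what the algorithm's output register encodes: the solution of a 2×2 system, recoverable only up to normalization, since measurement yields a quantum state proportional to x:

```python
# Classical sanity check (not the quantum circuit): the HHL-type algorithm
# realized in the experiment encodes the solution of A x = b as a quantum
# state proportional to x, so only the *normalized* solution is recovered.

import math

def solve_2x2_normalized(A, b):
    """Solve a 2x2 system by Cramer's rule, then normalize, mimicking
    what the output register of the quantum algorithm represents."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    x = [(b[0] * a22 - a12 * b[1]) / det,
         (a11 * b[1] - b[0] * a21) / det]
    norm = math.hypot(x[0], x[1])
    return [xi / norm for xi in x]

# Example with a Hermitian (here real symmetric) matrix, as HHL assumes.
x_hat = solve_2x2_normalized([[1.5, 0.5], [0.5, 1.5]], [1.0, 0.0])
# x_hat ≈ [0.949, -0.316], i.e. (3, -1)/sqrt(10)
```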

  19. Runge-Kutta discontinuous Galerkin method using a new type of WENO limiters on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Zhong, Xinghui; Shu, Chi-Wang; Qiu, Jianxian

    2013-09-01

    In this paper we generalize a new type of limiters based on the weighted essentially non-oscillatory (WENO) finite volume methodology for the Runge-Kutta discontinuous Galerkin (RKDG) methods solving nonlinear hyperbolic conservation laws, which were recently developed in [32] for structured meshes, to two-dimensional unstructured triangular meshes. The key idea of such limiters is to use the entire polynomials of the DG solutions from the troubled cell and its immediate neighboring cells, and then apply the classical WENO procedure to form a convex combination of these polynomials based on smoothness indicators and nonlinear weights, with suitable adjustments to guarantee conservation. The main advantage of this new limiter is its simplicity in implementation, especially for the unstructured meshes considered in this paper, as only information from immediate neighbors is needed and the usage of complicated geometric information of the meshes is largely avoided. Numerical results for both scalar equations and Euler systems of compressible gas dynamics are provided to illustrate the good performance of this procedure.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goltz, M.N.; Oxley, M.E.

    Aquifer cleanup efforts at contaminated sites frequently involve operation of a system of extraction wells. It has been found that contaminant load discharged by extraction wells typically declines with time, asymptotically approaching a residual level. Such behavior could be due to rate-limited desorption of an organic contaminant from aquifer solids. An analytical model is presented which accounts for rate-limited desorption of an organic solute during cleanup of a contaminated site. Model equations are presented which describe transport of a sorbing contaminant in a converging radial flow field, with sorption described by (1) equilibrium, (2) first-order rate, and (3) Fickian diffusion expressions. The model equations are solved in the Laplace domain and numerically inverted to simulate contaminant concentrations at an extraction well. A Laplace domain solution for the total contaminant mass remaining in the aquifer is also derived. It is shown that rate-limited sorption can have a significant impact upon aquifer remediation. Approximate equivalence among the various rate-limited models is also demonstrated.
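
    The abstract does not name the numerical inversion scheme; the Gaver-Stehfest algorithm is one common choice for this kind of Laplace-domain model, sketched here on a transform whose inverse is known analytically:

```python
# Hedged sketch: the abstract says the Laplace-domain solution is inverted
# numerically but does not name the inverter. The Gaver-Stehfest algorithm
# is one standard choice, shown here on a transform whose inverse is known:
# F(s) = 1/(s + 1)  <->  f(t) = exp(-t).

import math

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) with N (even) terms."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            v += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v *= (-1) ** (half + k)
        total += v * F(k * ln2 / t)
    return total * ln2 / t

approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
# exact value is exp(-1) ≈ 0.36788
```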

  1. Mathematics of the total alkalinity-pH equation - pathway to robust and universal solution algorithms: the SolveSAPHE package v1.0.1

    NASA Astrophysics Data System (ADS)

    Munhoven, G.

    2013-08-01

    The total alkalinity-pH equation, which relates total alkalinity and pH for a given set of total concentrations of the acid-base systems that contribute to total alkalinity in a given water sample, is reviewed and its mathematical properties established. We prove that the equation function is strictly monotone and always has exactly one positive root. Different commonly used approximations are discussed and compared. An original method to derive appropriate initial values for the iterative solution of the cubic polynomial equation based upon carbonate-borate-alkalinity is presented. We then review different methods that have been used to solve the total alkalinity-pH equation, with a main focus on biogeochemical models. The shortcomings and limitations of these methods are identified and discussed. We then present two variants of a new, robust and universally convergent algorithm to solve the total alkalinity-pH equation. This algorithm does not require any a priori knowledge of the solution. SolveSAPHE (Solver Suite for Alkalinity-PH Equations) provides reference implementations of several variants of the new algorithm in Fortran 90, together with new implementations of other, previously published solvers. The new iterative procedure is shown to converge from any starting value to the physical solution. The extra computational cost for the convergence security is only 10-15% compared to the fastest algorithm in our test series.
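
    The monotonicity result is what makes simple bracketing methods safe on this equation. A minimal sketch (illustrative constants and carbonate-only alkalinity, not the SolveSAPHE code):

```python
# Illustrative sketch, not the SolveSAPHE code: a carbonate-only version of
# the alkalinity-pH equation with assumed, order-of-magnitude seawater
# constants, solved by bisection. The strict monotonicity proved in the
# paper is what guarantees bisection converges to the unique positive root.

import math

K1, K2 = 1.4e-6, 1.1e-9      # assumed carbonic acid dissociation constants
DIC = 2.0e-3                 # assumed dissolved inorganic carbon (mol/kg)
TA = 2.3e-3                  # assumed total (here: carbonate-only) alkalinity

def carbonate_alkalinity(h):
    """[HCO3-] + 2[CO3--] for hydrogen-ion concentration h."""
    denom = h * h + K1 * h + K1 * K2
    return DIC * (K1 * h + 2.0 * K1 * K2) / denom

def solve_ph(lo=1e-12, hi=1e-2, tol=1e-18):
    """Bisect on h: carbonate_alkalinity is strictly decreasing in h."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if carbonate_alkalinity(mid) > TA:
            lo = mid            # alkalinity too high -> h must increase
        else:
            hi = mid
    return -math.log10(0.5 * (lo + hi))

ph = solve_ph()
# for these assumed constants, ph comes out near 8.2 (typical seawater)
```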

  2. On the use of flux limiters in the discrete ordinates method for 3D radiation calculations in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Godoy, William F.; DesJardin, Paul E.

    2010-05-01

    The application of flux limiters to the discrete ordinates method (DOM), SN, for radiative transfer calculations is discussed and analyzed for 3D enclosures for cases in which the intensities are strongly coupled to each other such as: radiative equilibrium and scattering media. A Newton-Krylov iterative method (GMRES) solves the final systems of linear equations along with a domain decomposition strategy for parallel computation using message passing libraries in a distributed memory system. Ray effects due to angular discretization and errors due to domain decomposition are minimized until small variations are introduced by these effects in order to focus on the influence of flux limiters on errors due to spatial discretization, known as numerical diffusion, smearing or false scattering. Results are presented for the DOM-integrated quantities such as heat flux, irradiation and emission. A variety of flux limiters are compared to "exact" solutions available in the literature, such as the integral solution of the RTE for pure absorbing-emitting media and isotropic scattering cases and a Monte Carlo solution for a forward scattering case. Additionally, a non-homogeneous 3D enclosure is included to extend the use of flux limiters to more practical cases. The overall balance of convergence, accuracy, speed and stability using flux limiters is shown to be superior compared to step schemes for any test case.

  3. Artificial intelligence within the chemical laboratory.

    PubMed

    Winkel, P

    1994-01-01

    Various techniques within the area of artificial intelligence such as expert systems and neural networks may play a role during the problem-solving processes within the clinical biochemical laboratory. Neural network analysis provides a non-algorithmic approach to information processing, which results in the ability of the computer to form associations and to recognize patterns or classes among data. It belongs to the machine learning techniques, which also include probabilistic techniques such as discriminant function analysis and logistic regression, and information theoretical techniques. These techniques may be used to extract knowledge from example patients to optimize decision limits and identify clinically important laboratory quantities. An expert system may be defined as a computer program that can give advice in a well-defined area of expertise and is able to explain its reasoning. Declarative knowledge consists of statements about logical or empirical relationships between things. Expert systems typically separate declarative knowledge residing in a knowledge base from the inference engine: an algorithm that dynamically directs and controls the system when it searches its knowledge base. A tool is an expert system without a knowledge base. The developer of an expert system uses a tool by entering knowledge into the system. Many, if not the majority, of the problems encountered at the laboratory level are procedural. A problem is procedural if it is possible to write up a step-by-step description of the expert's work or if it can be represented by a decision tree. To solve problems of this type, only small expert system tools and/or conventional programming are required.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. GIS as a tool for efficient management of transport streams

    NASA Astrophysics Data System (ADS)

    Zatserkovnyi, V. I.; Kobrin, O. V.

    2015-10-01

    The transport network, an ideal object for automation and efficiency improvement using geographic information systems (GIS), is considered. The transport problems, for which many mathematical models of traffic flow exist, are enumerated. GIS analysis tools that allow one to build optimal routes in the real road network, with its capabilities and limitations, are presented. They can help solve an extremely important problem of modern Ukraine: the rapid increase in the number of cars and the overloading of the road network. The intelligent transport systems, which are created and developed on the basis of GPS, GIS, and modern communications and telecommunications facilities, are considered.

  5. Liposomal curcumin and its application in cancer

    PubMed Central

    Lee, Robert J; Zhao, Ling

    2017-01-01

    Curcumin (CUR) is a yellow polyphenolic compound derived from the plant turmeric. It is widely used to treat many types of diseases, including cancers such as those of the lung, cervix, prostate, breast, bone and liver. However, its effectiveness has been limited due to poor aqueous solubility, low bioavailability, and rapid metabolism and systemic elimination. To solve these problems, researchers have explored novel drug delivery systems such as liposomes, solid dispersions, microemulsions, micelles, nanogels and dendrimers. Among these, liposomes have been the most extensively studied. Liposomal CUR formulations have greater growth-inhibitory and pro-apoptotic effects on cancer cells. This review mainly focuses on the preparation of liposomes containing CUR and its use in cancer therapy. PMID:28860764

  6. New design and operating techniques and requirements for improved aircraft terminal area operations

    NASA Technical Reports Server (NTRS)

    Reeder, J. P.; Taylor, R. T.; Walsh, T. M.

    1974-01-01

    Current aircraft operating problems that must be alleviated for future high-density terminal areas are safety, dependence on weather, congestion, energy conservation, noise, and atmospheric pollution. The Microwave Landing System (MLS) under development by the FAA provides increased capabilities over the current ILS. The development of the airborne system's capability to take maximum advantage of the MLS in order to solve terminal area problems is discussed. A major limiting factor in longitudinal spacing for capacity increase is the trailing vortex hazard. Promising methods for causing early dissipation of the vortices were explored. Flight procedures for avoiding the hazard were investigated. Terminal configured vehicles and their flight test development are discussed.

  7. Liposomal curcumin and its application in cancer.

    PubMed

    Feng, Ting; Wei, Yumeng; Lee, Robert J; Zhao, Ling

    2017-01-01

    Curcumin (CUR) is a yellow polyphenolic compound derived from the plant turmeric. It is widely used to treat many types of diseases, including cancers such as those of the lung, cervix, prostate, breast, bone and liver. However, its effectiveness has been limited due to poor aqueous solubility, low bioavailability, and rapid metabolism and systemic elimination. To solve these problems, researchers have explored novel drug delivery systems such as liposomes, solid dispersions, microemulsions, micelles, nanogels and dendrimers. Among these, liposomes have been the most extensively studied. Liposomal CUR formulations have greater growth-inhibitory and pro-apoptotic effects on cancer cells. This review mainly focuses on the preparation of liposomes containing CUR and its use in cancer therapy.

  8. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python

    PubMed Central

    Wils, Stefan; Schutter, Erik De

    2008-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245

  9. The developing one door licensing service system based on RESTful oriented services and MVC framework

    NASA Astrophysics Data System (ADS)

    Widiyanto, Sigit; Setyawan, Aris Budi; Tarigan, Avinanta; Sussanto, Herry

    2016-02-01

    The growth in the number of businesses increases the service requirements for companies and small and medium enterprises (SMEs) submitting their license requests. The required service system must be able to accommodate a large number of documents, various institutions, and the time limitations of applicants. In addition, distributed applications that can be integrated with one another are also required. A service-oriented application fits well with the client-server application that the Government has developed to digitize submitted data. A RESTful architecture and an MVC framework are used in developing the application. As a result, the application proves its capability in solving security, transaction speed, and data accuracy issues.

  10. Solving the linear inviscid shallow water equations in one dimension, with variable depth, using a recursion formula

    NASA Astrophysics Data System (ADS)

    Hernandez-Walls, R.; Martín-Atienza, B.; Salinas-Matus, M.; Castillo, J.

    2017-11-01

    When solving the linear inviscid shallow water equations with variable depth in one dimension using finite differences, a tridiagonal system of equations must be solved. Here we present an approach, which is more efficient than the commonly used numerical method, to solve this tridiagonal system of equations using a recursion formula. We illustrate this approach with an example in which we solve for a rectangular channel to find the resonance modes. Our numerical solution agrees very well with the analytical solution. This new method is easy to use and understand by undergraduate students, so it can be implemented in undergraduate courses such as Numerical Methods, Lineal Algebra or Differential Equations.
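
    A standard form of such a recursion for tridiagonal systems is the Thomas algorithm (forward elimination followed by back substitution); the authors' formula may differ in detail, but a sketch conveys the idea:

```python
# Sketch of a tridiagonal solve by forward elimination and back
# substitution (the Thomas algorithm); the recursion formula in the paper
# is in this spirit, though its details may differ.

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Laplacian-style system: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1],
# whose exact solution is x = [1, 1, 1].
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
           [1.0, 0.0, 1.0])
```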

  11. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
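
    A toy brute-force version (hypothetical numbers, not the paper's aircraft model or its simplex formulation) illustrates the resource-balancing property: at the min-max optimum, both actuators sit at the same fraction of their range.

```python
# Toy illustration (hypothetical numbers, not the paper's aircraft model)
# of the min-max "resource balancing" idea: two actuators must produce one
# commanded moment d = b1*u1 + b2*u2 while minimizing the largest
# deflection as a fraction of each actuator's limit.

b = (2.0, 1.0)       # assumed actuator effectiveness coefficients
lim = (10.0, 30.0)   # assumed deflection limits
d = 25.0             # assumed commanded moment

def cost(u1):
    u2 = (d - b[0] * u1) / b[1]   # satisfy the command exactly
    return max(abs(u1) / lim[0], abs(u2) / lim[1])

# Coarse search over the feasible deflections of actuator 1 (the paper
# instead converts the problem to a linear program and uses simplex).
best_u1 = min((-lim[0] + i * 2 * lim[0] / 4000 for i in range(4001)),
              key=cost)
best_u2 = (d - b[0] * best_u1) / b[1]
# At the optimum both actuators sit at the same fraction of their range
# (here 50%): u ≈ (5, 15), the "balanced" allocation.
```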

  12. Modeling and simulation of a hybrid ship power system

    NASA Astrophysics Data System (ADS)

    Doktorcik, Christopher J.

    2011-12-01

    Optimizing the performance of naval ship power systems requires integrated design and coordination of the respective subsystems (sources, converters, and loads). A significant challenge in the system-level integration is solving the Power Management Control Problem (PMCP). The PMCP entails deciding on subsystem power usages for achieving a trade-off between the error in tracking a desired position/velocity profile, minimizing fuel consumption, and ensuring stable system operation, while at the same time meeting performance limitations of each subsystem. As such, the PMCP naturally arises at a supervisory level of a ship's operation. In this research, several critical steps toward the solution of the PMCP for surface ships have been undertaken. First, new behavioral models have been developed for gas turbine engines, wound rotor synchronous machines, DC super-capacitors, induction machines, and ship propulsion systems. Conventional models describe system inputs and outputs in terms of physical variables such as voltage, current, torque, and force. In contrast, the behavioral models developed herein express system inputs and outputs in terms of power whenever possible. Additionally, the models have been configured to form a hybrid system-level power model (HSPM) of a proposed ship electrical architecture. Lastly, several simulation studies have been completed to expose the capabilities and limitations of the HSPM.

  13. Systems metabolic engineering of microorganisms to achieve large-scale production of flavonoid scaffolds.

    PubMed

    Wu, Junjun; Du, Guocheng; Zhou, Jingwen; Chen, Jian

    2014-10-20

    Flavonoids possess pharmaceutical potential due to their health-promoting activities. The complex structures of these products make extraction from plants difficult, and chemical synthesis is limited because of the use of many toxic solvents. Microbial production offers an alternate way to produce these compounds on an industrial scale in a more economical and environment-friendly manner. However, at present microbial production has been achieved only on a laboratory scale and improvements and scale-up of these processes remain challenging. Naringenin and pinocembrin, which are flavonoid scaffolds and precursors for most of the flavonoids, are the model molecules that are key to solving the current issues restricting industrial production of these chemicals. The emergence of systems metabolic engineering, which combines systems biology with synthetic biology and evolutionary engineering at the systems level, offers new perspectives on strain and process optimization. In this review, current challenges in large-scale fermentation processes involving flavonoid scaffolds and the strategies and tools of systems metabolic engineering used to overcome these challenges are summarized. This will offer insights into overcoming the limitations and challenges of large-scale microbial production of these important pharmaceutical compounds. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Human factors involvement in bringing the power of AI to a heterogeneous user population

    NASA Technical Reports Server (NTRS)

    Czerwinski, Mary; Nguyen, Trung

    1994-01-01

    The Human Factors involvement in developing COMPAQ QuickSolve, an electronic problem-solving and information system for Compaq's line of networked printers, is described. Empowering customers with expert system technology so they could solve advanced networked printer problems on their own was a major goal in designing this system. This process would minimize customer down-time, reduce the number of phone calls to the Compaq Customer Support Center, improve customer satisfaction, and, most importantly, differentiate Compaq printers in the marketplace by providing the best, and most technologically advanced, customer support. This represents a re-engineering of Compaq's customer support strategy and implementation. In its first-generation system, SMART, the objective was to provide expert knowledge to Compaq's help desk operation to more quickly and correctly answer customer questions and problems. QuickSolve is a second-generation system in that customer support is put directly in the hands of the consumers. As a result, the design of QuickSolve presented a number of challenging issues. Because the product would be used by a diverse and heterogeneous set of users, a significant amount of human factors research and analysis was required while designing and implementing the system; this research also shaped the organization and design of the expert system component.

  15. More than just fun and games: the longitudinal relationships between strategic video games, self-reported problem solving skills, and academic grades.

    PubMed

    Adachi, Paul J C; Willoughby, Teena

    2013-07-01

    Some researchers have proposed that video games possess good learning principles and may promote problem solving skills. Empirical research regarding this relationship, however, is limited. The goal of the presented study was to examine whether strategic video game play (i.e., role playing and strategy games) predicted self-reported problem solving skills among a sample of 1,492 adolescents (50.8% female), over the four high school years. The results showed that more strategic video game play predicted higher self-reported problem solving skills over time than less strategic video game play. In addition, the results showed support for an indirect association between strategic video game play and academic grades, in that strategic video game play predicted higher self-reported problem solving skills, and, in turn, higher self-reported problem solving skills predicted higher academic grades. The novel findings that strategic video games promote self-reported problem solving skills and indirectly predict academic grades are important considering that millions of adolescents play video games every day.

  16. Solving Nonlinear Differential Equations in the Engineering Curriculum

    ERIC Educational Resources Information Center

    Auslander, David M.

    1977-01-01

    Described is the Dynamic System Simulation Language (SIM) mini-computer system utilized at the University of California, Los Angeles. It is used by engineering students for solving nonlinear differential equations. (SL)
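
    The kind of computation such a simulation language performs can be sketched with a fixed-step Runge-Kutta integrator. This is a generic illustration only; it does not reproduce SIM's actual syntax or solver.

```python
# Hedged sketch: fixed-step fourth-order Runge-Kutta integration of the
# nonlinear ODE dx/dt = -x^3, the sort of problem students would pose to
# a simulation language like SIM.
def rk4_step(f, x, t, h):
    """Advance the state x by one step of size h using classical RK4."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, x: -x ** 3      # nonlinear right-hand side
x, t, h = 1.0, 0.0, 0.01      # initial condition x(0) = 1, step size
for _ in range(100):          # integrate to t = 1
    x = rk4_step(f, x, t, h)
    t += h

# The exact solution is x(t) = 1 / sqrt(1 + 2t), so x(1) = 1/sqrt(3).
```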

  17. Problem solving using soft systems methodology.

    PubMed

    Land, L

    This article outlines a method of problem solving which considers holistic solutions to complex problems. Soft systems methodology allows people involved in the problem situation to have control over the decision-making process.

  18. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

    Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can practically solve only problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this limits the problems that can practically be solved to around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to approximately solve more general Frobenius-norm-regularized SDP problems.
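
    As an illustration of the scale issue (not the authors' dual algorithm), a Mahalanobis metric is defined by a symmetric positive semidefinite matrix M with D(D+1)/2 free entries; a quick sketch with made-up data:

```python
import numpy as np

# Illustrative sketch, not the paper's method: a Mahalanobis metric
# d_M(x, y)^2 = (x - y)^T M (x - y) with M symmetric positive semidefinite.
# Writing M = L L^T guarantees PSD without invoking an SDP solver.
rng = np.random.default_rng(0)
D = 4
L = rng.standard_normal((D, D))
M = L @ L.T                      # symmetric PSD by construction

x = rng.standard_normal(D)
y = rng.standard_normal(D)
diff = x - y
d2 = float(diff @ M @ diff)      # squared distance, nonnegative for PSD M

n_vars = D * (D + 1) // 2        # free variables in a symmetric D x D matrix
```

    For D = 1000 this already means over half a million variables, which is the scaling pressure the abstract describes.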

  19. Chance-Constrained System of Systems Based Operation of Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargarian, Amin; Fu, Yong; Wu, Hongyu

    In this paper, a chance-constrained system of systems (SoS) based decision-making approach is presented for stochastic scheduling of power systems encompassing active distribution grids. Based on the concept of SoS, the independent system operator (ISO) and distribution companies (DISCOs) are modeled as self-governing systems. These systems collaborate with each other to run the entire power system in a secure and economic manner. Each self-governing system accounts for its local reserve requirements and line flow constraints with respect to the uncertainties of load and renewable energy resources. A set of chance constraints are formulated to model the interactions between the ISO and DISCOs. The proposed model is solved by using the analytical target cascading (ATC) method, a distributed optimization algorithm in which only a limited amount of information is exchanged between the collaborating ISO and DISCOs. In this paper, a 6-bus power system and a modified IEEE 118-bus power system are studied to show the effectiveness of the proposed algorithm.
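
    To see how a single chance constraint becomes deterministic, assume (purely for illustration; the paper's model is far richer) Gaussian uncertainty on one line flow. The numbers below are made up.

```python
from statistics import NormalDist

# Hedged sketch: a chance constraint P(flow <= limit) >= 1 - eps with
# Gaussian flow uncertainty reduces to the deterministic requirement
# limit >= mu + z_{1-eps} * sigma.
mu, sigma = 80.0, 5.0                 # forecast flow mean and std (MW)
eps = 0.05                            # allowed violation probability
z = NormalDist().inv_cdf(1.0 - eps)   # standard normal quantile, ~1.645
required_limit = mu + z * sigma       # smallest limit satisfying the constraint
```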

  20. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, reduce processing costs, and minimize incompatibility between data sources. Artificial Intelligence (AI) techniques were suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.

  1. A general strategy to solve the phase problem in RNA crystallography

    PubMed Central

    Keel, Amanda Y.; Rambo, Robert P.; Batey, Robert T.; Kieft, Jeffrey S.

    2007-01-01

    X-ray crystallography of biologically important RNA molecules has been hampered by technical challenges, including finding a heavy-atom derivative to obtain high-quality experimental phase information. Existing techniques have drawbacks, severely limiting the rate at which important new structures are solved. To address this need, we have developed a reliable means to localize heavy atoms specifically to virtually any RNA. By solving the crystal structures of thirteen variants of the G·U wobble pair cation binding motif, we have identified an optimal version that, when inserted into an RNA helix, introduces a high-occupancy cation binding site suitable for phasing. This “directed soaking” strategy can be integrated fully into existing RNA and crystallography methods, potentially increasing the rate at which important structures are solved and facilitating routine solving of structures using Cu-Kα radiation. The success of this method has already been demonstrated: it has been used to solve several novel crystal structures. PMID:17637337

  2. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving.

    PubMed

    Semeniuk, Yulia Yuriyivna; Brown, Roger L; Riesch, Susan K

    2016-07-01

    We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem-Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem-Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. © The Author(s) 2016.

  3. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    NASA Astrophysics Data System (ADS)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We note that clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
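
    The node clustering coefficient that the ALC concept refines can be sketched as follows; this is an illustrative toy, and the paper's ALC formula itself is not reproduced here.

```python
from itertools import combinations

# Toy graph as an adjacency dict.
adj = {
    'a': {'b', 'c', 'z'},
    'b': {'a', 'z'},
    'c': {'a', 'z'},
    'z': {'a', 'b', 'c'},
}

def node_clustering(g, v):
    """Fraction of pairs of v's neighbors that are themselves linked."""
    nbrs = g[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, w in combinations(sorted(nbrs), 2) if w in g[u])
    return 2.0 * links / (k * (k - 1))

def cc_score(g, x, y):
    """Clustering-based common-neighbor score for a candidate link (x, y)."""
    return sum(node_clustering(g, z) for z in g[x] & g[y])
```

    Note that cc_score weighs every common neighbor's clustering the same for all node pairs; the ALC coefficient is introduced precisely because this symmetry is too coarse.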

  4. Solving the detour problem in navigation: a model of prefrontal and hippocampal interactions.

    PubMed

    Spiers, Hugo J; Gilbert, Sam J

    2015-01-01

    Adapting behavior to accommodate changes in the environment is an important function of the nervous system. A universal problem for motile animals is the discovery that a learned route is blocked and a detour is required. Given the substantial neuroscience research on spatial navigation and decision-making it is surprising that so little is known about how the brain solves the detour problem. Here we review the limited number of relevant functional neuroimaging, single unit recording and lesion studies. We find that while the prefrontal cortex (PFC) consistently responds to detours, the hippocampus does not. Recent evidence suggests the hippocampus tracks information about the future path distance to the goal. Based on this evidence we postulate a conceptual model in which: Lateral PFC provides a prediction error signal about the change in the path, frontopolar and superior PFC support the re-formulation of the route plan as a novel subgoal and the hippocampus simulates the new path. More data will be required to validate this model and understand (1) how the system processes the different options; and (2) deals with situations where a new path becomes available (i.e., shortcuts).

  5. Strengthened MILP formulation for certain gas turbine unit commitment problems

    DOE PAGES

    Pan, Kai; Guan, Yongpei; Watson, Jean -Paul; ...

    2015-05-22

    In this study, we derive a strengthened MILP formulation for certain gas turbine unit commitment problems, in which the ramping rates are no smaller than the minimum generation amounts. This type of gas turbine can usually start up faster and has a larger ramping rate, as compared to traditional coal-fired power plants. Recently, the number of such gas turbines has increased significantly due to affordable gas prices and their scheduling flexibility to accommodate intermittent renewable energy generation. In this study, several new families of strong valid inequalities are developed to help reduce the computational time to solve these types of problems. Meanwhile, validity and facet-defining proofs are provided for certain inequalities. Finally, numerical experiments on a modified IEEE 118-bus system and power system data based on recent studies verify the effectiveness of applying our formulation to model and solve this type of gas turbine unit commitment problem, including reducing the computational time to obtain an optimal solution or obtaining a much smaller optimality gap, as compared to the default CPLEX, when the time limit is reached with no optimal solution obtained.

  6. Application of the Fourier pseudospectral time-domain method in orthogonal curvilinear coordinates for near-rigid moderately curved surfaces.

    PubMed

    Hornikx, Maarten; Dragna, Didier

    2015-07-01

    The Fourier pseudospectral time-domain method is an efficient wave-based method to model sound propagation in inhomogeneous media. One of the limitations of the method for atmospheric sound propagation purposes is its restriction to a Cartesian grid, confining it to staircase-like geometries. A transform from the physical coordinate system to the curvilinear coordinate system has been applied to solve more arbitrary geometries. For applicability of this method near the boundaries, the acoustic velocity variables are solved for their curvilinear components. The performance of the curvilinear Fourier pseudospectral method is investigated in free field and for outdoor sound propagation over an impedance strip for various types of shapes. Accuracy is shown to be related to the maximum grid stretching ratio and deformation of the boundary shape and computational efficiency is reduced relative to the smallest grid cell in the physical domain. The applicability of the curvilinear Fourier pseudospectral time-domain method is demonstrated by investigating the effect of sound propagation over a hill in a nocturnal boundary layer. With the proposed method, accurate and efficient results for sound propagation over smoothly varying ground surfaces with high impedances can be obtained.
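
    The core operation of any Fourier pseudospectral scheme is the spectral derivative on a periodic grid. A minimal 1-D sketch follows; the paper's curvilinear, multi-dimensional machinery builds on this operation but is not shown.

```python
import numpy as np

# Minimal 1-D illustration of the Fourier pseudospectral derivative on a
# periodic grid of N points over [0, 2*pi).
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(3.0 * x)                            # smooth periodic field

ik = 1j * np.fft.fftfreq(N, d=1.0 / N)         # i * integer wavenumbers
du = np.real(np.fft.ifft(ik * np.fft.fft(u)))  # differentiate in spectral space

# For band-limited fields the result is accurate to machine precision.
err = float(np.max(np.abs(du - 3.0 * np.cos(3.0 * x))))
```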

  7. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than the bi-level approaches, but without their computational difficulties. Key words: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  8. The VLBI Data Analysis Software νSolve: Development Progress and Plans for the Future

    NASA Astrophysics Data System (ADS)

    Bolotin, S.; Baver, K.; Gipson, J.; Gordon, D.; MacMillan, D.

    2014-12-01

    The program νSolve is a part of the CALC/SOLVE VLBI data analysis system. It is a replacement for interactive SOLVE, the part of CALC/SOLVE that is used for preliminary data analysis of new VLBI sessions. νSolve is completely new software. It is written in C++ and has a modern graphical user interface. In this article we present the capabilities of the software, its current status, and our plans for future development.

  9. Human factors evaluations of Free Flight: Issues solved and issues remaining.

    PubMed

    Ruigrok, Rob C J; Hoekstra, Jacco M

    2007-07-01

    The Dutch National Aerospace Laboratory (NLR) has conducted extensive human-in-the-loop simulation experiments in NLR's Research Flight Simulator (RFS), focussed on human factors evaluation of Free Flight. Eight years of research, in co-operation with partners in the United States and Europe, has shown that Free Flight has the potential to increase airspace capacity by at least a factor of 3. Expected traffic loads and conflict rates for the year 2020 appear to be no major problem for professional airline crews participating in flight simulation experiments. Flight efficiency is significantly improved by user-preferred routings, including cruise climbs, while pilot workload is only slightly increased compared to today's reference. Detailed results from three projects and six human-in-the-loop experiments in NLR's Research Flight Simulator are reported. The main focus of these results is on human factors issues and particularly workload, measured both subjectively and objectively. An extensive discussion is included on many human factors issues resolved during the experiments, but also open issues are identified. An intent-based Conflict Detection and Resolution (CD&R) system provides "benefits" in terms of reduced pilot workload, but also "costs" in terms of complexity, need for priority rules, potential compatibility problems between different brands of Flight Management Systems and large bandwidth. Moreover, the intent-based system is not effective at solving multi-aircraft conflicts. A state-based CD&R system also provides "benefits" and "costs". Benefits compared to the full intent-based system are simplicity, low bandwidth requirements, easy to retrofit (no requirements to change avionics infrastructure) and the ability to solve multi-aircraft conflicts in parallel. 
The "costs" involve a somewhat higher pilot workload in similar circumstances, the smaller look-ahead time which results in less efficient resolution manoeuvres and the sometimes false/nuisance alerts due to missing intent information. The optimal CD&R system (in terms of costs versus benefits) has been suggested to be state-based CD&R with the addition of intended or target flight level. This combination of state-based CD&R with a limited amount of intent provides "the best of both worlds". Studying this CD&R system is still an open issue.

  10. DORA-II Technical Adequacy Brief: Measuring the Process and Outcomes of Team Problem Solving

    ERIC Educational Resources Information Center

    Algozzine, Bob; Horner, Robert H.; Todd, Anne W.; Newton, J. Stephen; Algozzine, Kate; Cusumano, Dale

    2014-01-01

    School teams regularly meet to review academic and social problems of individual students, groups of students, or their school in general. While the need for problem solving and recommendations for how to do it are widely documented, there is very limited evidence reflecting the extent to which teams effectively engage in a systematic or effective…

  11. More than Just Fun and Games: The Longitudinal Relationships between Strategic Video Games, Self-Reported Problem Solving Skills, and Academic Grades

    ERIC Educational Resources Information Center

    Adachi, Paul J. C.; Willoughby, Teena

    2013-01-01

    Some researchers have proposed that video games possess good learning principles and may promote problem solving skills. Empirical research regarding this relationship, however, is limited. The goal of the presented study was to examine whether strategic video game play (i.e., role playing and strategy games) predicted self-reported problem…

  12. Decomposition methods for midterm planning of hydroelectric production under uncertainty [Methodes de decomposition pour la planification a moyen terme de la production hydroelectrique sous incertitude]

    NASA Astrophysics Data System (ADS)

    Carpentier, Pierre-Luc

    In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high-dimensional reservoir systems that are operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation on the non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic terms must be included in the subproblems' objective functions to penalize any violation of the NACs. An important limitation of the PHA stems from the fact that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation of the PHA is that the difficulty of the NACs generally increases as the variability of the scenarios increases. 
    Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions characterized by a high level of seasonal and interannual variability. These two types of limitations can slow down the algorithm's convergence rate and increase the running time per iteration. In this study, we apply the PHA to Hydro-Quebec's power system over a 92-week planning horizon. Hydrologic uncertainty is represented by a scenario tree containing 6 branching stages and 1,635 nodes. The PHA is especially well suited for this particular application given that the company already possesses a deterministic optimization model to solve the MTPP. The second article presents a new approach that enhances the performance of the PHA for solving general multistage stochastic programs. The proposed method works by applying a multiscenario decomposition scheme to the stochastic program. Our heuristic method aims at constructing an optimal partition of the scenario set by minimizing the number of NACs on which an augmented Lagrangian relaxation must be applied. Each subproblem is a stochastic program defined on a group of scenarios. NACs linking scenarios sharing a common group are represented implicitly in the subproblems by using a group-node index system instead of the traditional scenario-time index system. Only the NACs that link the different scenario groups are represented explicitly and relaxed. The proposed method is evaluated numerically on a hydroelectric reservoir management problem in Quebec. The results of this experiment show that our method has several advantages. Firstly, it reduces the number of penalty terms included in the objective function and the amount of duplicated constraints and variables; in turn, this reduces the running time per iteration of the algorithm. 
    Secondly, it increases the algorithm's convergence rate by reducing the variability of intermediary solutions at duplicated tree nodes. Thirdly, our approach reduces the amount of random-access memory (RAM) required to store the Lagrange multipliers associated with the relaxed NACs. The third article presents an extension of the L-Shaped method designed specifically for managing hydroelectric reservoir systems with a high storage capacity. The method proposed in this paper makes it possible to consider a higher branching level than conventional decomposition methods allow. To achieve this, we assume that the stochastic process driving the random parameters has a memory loss at time period t = tau. Because of this assumption, the scenario tree possesses a special symmetrical structure at the second stage (t > tau). We exploit this feature using a two-stage Benders decomposition method. Each decomposition stage covers several consecutive time periods. The proposed method works by constructing a convex and piecewise linear recourse function that represents the expected cost of the second stage in the master problem. The subproblem and the master problem are stochastic programs defined on scenario subtrees and can be solved using a conventional decomposition method or directly. We test the proposed method on a hydroelectric power system in Quebec over a 104-week planning horizon. (Abstract shortened by UMI.)
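
    The progressive hedging iteration described above can be sketched on a toy problem, vastly simpler than a hydro scheduling model: two equally likely scenarios, one shared first-stage decision, and a quadratic scenario cost. All numbers are made up.

```python
# Toy progressive hedging sketch: scenario data xi, shared decision x,
# scenario cost (x - xi_s)^2. Non-anticipativity is enforced through the
# multipliers w and the quadratic penalty with parameter rho.
xi = [1.0, 3.0]
rho = 1.0
w = [0.0, 0.0]
xbar = 0.0

for _ in range(100):
    # Each scenario subproblem
    #   min_x (x - xi_s)^2 + w_s * x + (rho / 2) * (x - xbar)^2
    # is solved in closed form by setting the derivative to zero.
    xs = [(2.0 * xi[s] - w[s] + rho * xbar) / (2.0 + rho) for s in range(2)]
    xbar = sum(xs) / 2.0                            # implementable decision
    w = [w[s] + rho * (xs[s] - xbar) for s in range(2)]

# For this convex toy problem the hedged decision converges to the
# scenario mean, xbar = 2.0.
```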

  13. Social problem-solving among adolescents treated for depression.

    PubMed

    Becker-Weidman, Emily G; Jacobs, Rachel H; Reinecke, Mark A; Silva, Susan G; March, John S

    2010-01-01

    Studies suggest that deficits in social problem-solving may be associated with increased risk of depression and suicidality in children and adolescents. It is unclear, however, which specific dimensions of social problem-solving are related to depression and suicidality among youth. Moreover, rational problem-solving strategies and problem-solving motivation may moderate or predict change in depression and suicidality among children and adolescents receiving treatment. The effects of social problem-solving on acute treatment outcomes were explored in a randomized controlled trial of 439 clinically depressed adolescents enrolled in the Treatment for Adolescents with Depression Study (TADS). Measures included the Children's Depression Rating Scale-Revised (CDRS-R), the Suicidal Ideation Questionnaire--Grades 7-9 (SIQ-Jr), and the Social Problem-Solving Inventory-Revised (SPSI-R). A random coefficients regression model was used to examine main and interaction effects of treatment and SPSI-R subscale scores on outcomes during the 12-week acute treatment stage. Negative problem orientation, positive problem orientation, and avoidant problem-solving style were non-specific predictors of depression severity. In terms of suicidality, avoidant problem-solving style and impulsiveness/carelessness style were predictors, whereas negative problem orientation and positive problem orientation were moderators of treatment outcome. Implications of these findings, limitations, and directions for future research are discussed. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Sparse learning of stochastic dynamical equations

    NASA Astrophysics Data System (ADS)

    Boninsegna, Lorenzo; Nüske, Feliks; Clementi, Cecilia

    2018-06-01

    With the rapid increase of available data for complex systems, there is great interest in the extraction of physically relevant information from massive datasets. Recently, a framework called Sparse Identification of Nonlinear Dynamics (SINDy) has been introduced to identify the governing equations of dynamical systems from simulation data. In this study, we extend SINDy to stochastic dynamical systems which are frequently used to model biophysical processes. We prove the asymptotic correctness of stochastic SINDy in the infinite data limit, both in the original and projected variables. We discuss algorithms to solve the sparse regression problem arising from the practical implementation of SINDy and show that cross validation is an essential tool to determine the right level of sparsity. We demonstrate the proposed methodology on two test systems, namely, the diffusion in a one-dimensional potential and the projected dynamics of a two-dimensional diffusion process.
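
    The sparse regression at the heart of SINDy can be sketched with sequentially thresholded least squares on a toy deterministic system; the paper's stochastic extension and cross-validation machinery are not shown.

```python
import numpy as np

# Sketch of SINDy's sequentially thresholded least-squares core on the toy
# system dx/dt = -2x, whose true coefficient vector is sparse.
x = np.linspace(-2.0, 2.0, 200)
dxdt = -2.0 * x                                            # noise-free "measurements"

theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate library
xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]           # initial dense fit

for _ in range(10):                                        # threshold, then refit
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    if (~small).any():
        xi[~small] = np.linalg.lstsq(theta[:, ~small], dxdt, rcond=None)[0]

# Only the linear term survives: xi recovers approximately [0, -2, 0, 0].
```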

  15. Harnessing the power of multimedia in offender-based law enforcement information systems

    NASA Astrophysics Data System (ADS)

    Zimmerman, Alan P.

    1997-02-01

    Criminal offenders are increasingly processed administratively by automated multimedia information systems. During this processing, case and offender biographical data, mugshot photos, fingerprints and other valuable information and media are collected by law enforcement officers. As part of their criminal investigations, law enforcement officers are routinely called upon to solve criminal cases based upon limited evidence: evidence increasingly composed of human DNA, ballistic casings and projectiles, chemical residues, latent fingerprints, surveillance camera facial images and voices. As multimedia systems receive greater use in law enforcement, traditional approaches used to index text data are not appropriate for the images and signal data that comprise a multimedia database. Multimedia systems with integrated advanced pattern matching tools will provide law enforcement the ability to effectively locate multimedia information based upon content, without reliance upon the accuracy or completeness of text-based indexing.

  16. Issues on combining human and non-human intelligence

    NASA Technical Reports Server (NTRS)

    Statler, Irving C.; Connors, Mary M.

    1991-01-01

    The purpose here is to call attention to some of the issues confronting the designer of a system that combines human and non-human intelligence. We do not know how to design a non-human intelligence in such a way that it will fit naturally into a human organization. The authors' concern is that, without adequate understanding and consideration of the behavioral and psychological limitations and requirements of the human member(s) of the system, the introduction of artificial intelligence (AI) subsystems can exacerbate operational problems. We have seen that, when these technologies are not properly applied, an overall degradation of performance at the system level can occur. Only by understanding how human and automated systems work together can we be sure that the problems introduced by automation are not more serious than the problems solved.

  17. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  18. Triz in Mems

    NASA Astrophysics Data System (ADS)

    Apte, Prakash R.

    1999-11-01

TRIZ is a Russian acronym for the 'Theory of Inventive Problem Solving', developed fifty years ago by Genrich Altshuller in the former Soviet Union. Altshuller examined thousands of inventions made in different technological systems and formulated the theory from them. His research of over fifty years on creativity and inventive problem solving has led to many different classifications, methods and tools of invention. Some of these are the contradictions table, levels of invention, patterns in the evolution of technological systems, ARIZ (the Algorithm for Inventive Problem Solving), diagnostic problem solving and Anticipatory Failure Determination. MEMS research spans conceptual design, process technology and the inclusion of various mechanical, electrical, thermal, magnetic, acoustic and other effects. MEMS systems are now rapidly growing in complexity. Each system will thus follow one or more of the 'patterns of evolution' given by Altshuller. This paper attempts to indicate how various TRIZ tools can be used in MEMS research activities.

  19. The mathematical statement for the solving of the problem of N-version software system design

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

N-version programming, as a methodology for designing fault-tolerant software systems, allows such design tasks to be solved successfully. The N-version programming approach is effective because the system is constructed out of several versions of some software module, executed in parallel. Those versions are written to meet the same specification but by different programmers. The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality.
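The N-version idea above, several independently written versions of one module executed against the same specification, can be sketched with a simple majority-vote adjudicator. This is a hypothetical toy illustrating the runtime mechanism, not the paper's structural optimization model; the versions and the seeded fault are invented for the example.

```python
from collections import Counter

def n_version_execute(versions, x):
    """Run independently developed versions on the same input and vote.

    `versions` is a list of callables implementing the same specification;
    this toy adjudicator returns the majority answer and fails loudly
    when no strict majority exists.
    """
    results = [v(x) for v in versions]
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority: versions disagree")
    return winner

# three illustrative 'versions' of an integer square-root module,
# one of which contains a seeded fault on a single input
v1 = lambda n: int(n ** 0.5)
v2 = lambda n: next(i for i in range(n + 2) if (i + 1) ** 2 > n)
v3 = lambda n: int(n ** 0.5) + (1 if n == 10 else 0)  # faulty on n == 10

result = n_version_execute([v1, v2, v3], 10)  # v3's fault is outvoted
```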

  20. Solving TSP problem with improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying

    2018-05-01

The TSP is a typical NP-hard problem. The optimization of vehicle routing problems (VRP) and city pipeline optimization can be reduced to the TSP, so solving the TSP efficiently is very important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. Improving the selection operator of the genetic algorithm and introducing an elite retention strategy can ensure the quality of the selection operation. In the mutation operation, adaptive selection of the mutation scheme can improve the quality of the search and of the variation. After a chromosome has evolved, a one-way-evolution reverse operation is added, which improves the offspring's chances of inheriting high-quality parental genes and improves the algorithm's ability to search for the optimal solution.
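A compact GA for the TSP incorporating the kinds of improvements described above (elite retention, an adaptive mutation rate, and a segment-reversal operation) might look as follows. The population size, mutation schedule, selection scheme and the circular test instance are all assumptions for illustration, not the paper's exact algorithm.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    # OX: copy a slice from p1, fill remaining cities in p2's order
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    j = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[j]; j += 1
    return child

def ga_tsp(dist, pop_size=60, gens=200, seed=0):
    random.seed(seed)
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for g in range(gens):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[:2]                        # elite retention strategy
        rate = 0.05 + 0.25 * g / gens          # adaptive mutation rate (assumed schedule)
        nxt = elite[:]
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < rate:         # mutation: reverse a random segment
                a, b = sorted(random.sample(range(n), 2))
                child[a:b] = reversed(child[a:b])
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=lambda t: tour_length(t, dist))
        if tour_length(cand, dist) < tour_length(best, dist):
            best = cand
    return best, tour_length(best, dist)

# 8 cities on a unit circle: the optimal tour visits them in angular order
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best, length = ga_tsp(dist)
```

On this toy instance the optimal tour length is 16 sin(pi/8), about 6.12, which a small GA should approach quickly.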

  1. Metaphor Clusters: Characterizing Instructor Metaphorical Reasoning on Limit Concepts in College Calculus

    ERIC Educational Resources Information Center

    Patel, Rita Manubhai; McCombs, Paul; Zollman, Alan

    2014-01-01

    Novice students have difficulty with the topic of limits in calculus. We believe this is in part because of the multiple perspectives and shifting metaphors available to solve items correctly. We investigated college calculus instructors' personal concepts of limits. Based upon previous research investigating introductory calculus student…

  2. Visualizing the Central Limit Theorem through Simulation

    ERIC Educational Resources Information Center

    Ruggieri, Eric

    2016-01-01

The Central Limit Theorem is one of the most important concepts taught in an introductory statistics course; however, it may be the least understood by students. Sure, students can plug numbers into a formula and solve problems, but conceptually, do they really understand what the Central Limit Theorem is saying? This paper describes a simulation…
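The kind of simulation the record describes can be sketched in a few lines: draw repeated samples from a clearly non-normal parent distribution and observe the distribution of sample means contracting around the parent mean with spread sigma/sqrt(n). The exponential parent and the sample sizes below are illustrative choices, not taken from the paper.

```python
import random
import statistics

def sample_means(dist_draw, n, trials, seed=1):
    """Draw `trials` means of samples of size n from an arbitrary distribution."""
    rng = random.Random(seed)
    return [statistics.fmean(dist_draw(rng) for _ in range(n))
            for _ in range(trials)]

# heavily skewed parent: exponential with mean 1 (and variance 1)
draw = lambda rng: rng.expovariate(1.0)
means = sample_means(draw, n=50, trials=5000)

mu = statistics.fmean(means)     # approaches the parent mean, 1
sigma = statistics.stdev(means)  # approaches 1/sqrt(50), about 0.141
```

Plotting a histogram of `means` shows the bell shape emerging even though the parent distribution is strongly skewed.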

  3. Relativistic quantum Darwinism in Dirac fermion and graphene systems

    NASA Astrophysics Data System (ADS)

    Ni, Xuan; Huang, Liang; Lai, Ying-Cheng; Pecora, Louis

    2012-02-01

    We solve the Dirac equation in two spatial dimensions in the setting of resonant tunneling, where the system consists of two symmetric cavities connected by a finite potential barrier. The shape of the cavities can be chosen to yield both regular and chaotic dynamics in the classical limit. We find that certain pointer states about classical periodic orbits can exist, which are signatures of relativistic quantum Darwinism (RQD). These localized states suppress quantum tunneling, and the effect becomes less severe as the underlying classical dynamics in the cavity is chaotic, leading to regularization of quantum tunneling. Qualitatively similar phenomena have been observed in graphene. A physical theory is developed to explain relativistic quantum Darwinism and its effects based on the spectrum of complex eigenenergies of the non-Hermitian Hamiltonian describing the open cavity system.

  4. Unraveling mirror properties in time-delayed quantum feedback scenarios

    NASA Astrophysics Data System (ADS)

    Faulstich, Fabian M.; Kraft, Manuel; Carmele, Alexander

    2018-06-01

We derive in the Heisenberg picture a widely used phenomenological coupling element for treating feedback effects in quantum optical platforms. Our derivation is based on a microscopic Hamiltonian which describes the mirror-emitter dynamics in terms of a dielectric, a mediating fully quantized electromagnetic field, and a single two-level system in front of the dielectric. The dielectric is modelled as a system of identical two-state atoms. The Heisenberg equation yields a system of differential operator equations, which we solve in the Weisskopf-Wigner limit. Due to the finite round-trip time between emitter and dielectric, we obtain delay differential operator equations. Our derivation motivates and justifies the typically assumed phenomenological coupling element and furthermore allows a generalization to a variety of mirrors, such as dissipative mirrors or mirrors with gain dynamics.

  5. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  6. Drift-Alfven eigenmodes in inhomogeneous plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vranjes, J.; Poedts, S.

    2006-03-15

A set of three nonlinear equations describing drift-Alfven waves in a nonuniform magnetized plasma is derived and discussed in both the linear and nonlinear limits. In the case of a cylindrical, radially bounded plasma with a Gaussian density distribution in the radial direction, the linearized equations are solved exactly, yielding general solutions for modes with quantized frequencies and with radially dependent amplitudes. The full set of nonlinear equations is also solved, yielding particular solutions in the form of rotating, radially limited structures. The results should be applicable to the description of electromagnetic perturbations in solar magnetic structures and in astrophysical column-like objects, including cosmic tornados.

  7. Improved Linear Algebra Methods for Redshift Computation from Limited Spectrum Data - II

    NASA Technical Reports Server (NTRS)

    Foster, Leslie; Waagen, Alex; Aijaz, Nabella; Hurley, Michael; Luis, Apolo; Rinsky, Joel; Satyavolu, Chandrika; Gazis, Paul; Srivastava, Ashok; Way, Michael

    2008-01-01

    Given photometric broadband measurements of a galaxy, Gaussian processes may be used with a training set to solve the regression problem of approximating the redshift of this galaxy. However, in practice solving the traditional Gaussian processes equation is too slow and requires too much memory. We employed several methods to avoid this difficulty using algebraic manipulation and low-rank approximation, and were able to quickly approximate the redshifts in our testing data within 17 percent of the known true values using limited computational resources. The accuracy of one method, the V Formulation, is comparable to the accuracy of the best methods currently used for this problem.
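The low-rank idea above can be illustrated with a subset-of-regressors approximation, one standard way to reduce GP regression from O(n^3) to O(nm^2) cost. This is a generic stand-in rather than the paper's V Formulation, and the kernel, synthetic data and inducing-point count are assumptions for the example.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # squared-exponential kernel between row-vector point sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sor_gp_predict(X, y, Xs, m=50, noise=0.1, seed=0):
    """Subset-of-regressors low-rank GP predictive mean:
    mean = K*m (noise^2 Kmm + Kmn Knm)^-1 Kmn y, costing O(n m^2)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # random inducing points
    Xu = X[idx]
    Kmm = rbf(Xu, Xu)
    Knm = rbf(X, Xu)
    A = noise**2 * Kmm + Knm.T @ Knm                 # m x m system only
    w = np.linalg.solve(A, Knm.T @ y)
    return rbf(Xs, Xu) @ w

# synthetic smooth regression standing in for 'photometry -> redshift'
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=400)
Xs = np.linspace(-3, 3, 50)[:, None]
pred = sor_gp_predict(X, y, Xs)
err = np.max(np.abs(pred - np.sin(Xs[:, 0])))
```

The only linear solve is m x m, which is the essence of why such low-rank formulations fit in limited memory.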

  8. A genetic algorithm used for solving one optimization problem

    NASA Astrophysics Data System (ADS)

    Shipacheva, E. N.; Petunin, A. A.; Berezin, I. M.

    2017-12-01

A problem of minimizing the length of the blank run of a cutting tool during the cutting of sheet materials into shaped blanks is discussed. This problem arises during the preparation of control programs for computerized numerical control (CNC) machines. A discrete model of the problem is analogous in its setting to the generalized travelling salesman problem with constraints in the form of precedence conditions determined by the technological features of cutting. A variant of a genetic algorithm for solving this problem is described. The effect of the parameters of the developed algorithm on the solution of the constrained problem is investigated.

  9. Is IPT Time-Limited Psychodynamic Psychotherapy?

    PubMed Central

    Markowitz, John C.; Svartberg, Martin; Swartz, Holly A.

    1998-01-01

    Interpersonal psychotherapy (IPT) has sometimes but not always been considered a psychodynamic psychotherapy. The authors discuss similarities and differences between IPT and short-term psychodynamic psychotherapy (STPP), comparing eight aspects: 1) time limit, 2) medical model, 3) dual goals of solving interpersonal problems and syndromal remission, 4) interpersonal focus on the patient solving current life problems, 5) specific techniques, 6) termination, 7) therapeutic stance, and 8) empirical support. The authors then apply both approaches to a case example of depression. They conclude that despite overlaps and similarities, IPT is distinct from STPP.(The Journal of Psychotherapy Practice and Research 1998; 7:185–195) PMID:9631340

  10. Performance of subjects with and without severe mental illness on a clinical test of problem solving.

    PubMed

    Marshall, R C; McGurk, S R; Karow, C M; Kairy, T J; Flashman, L A

    2006-06-01

Severe mental illness is associated with impairments in executive functions, such as conceptual reasoning, planning, and strategic thinking, all of which impact problem solving. The present study examined the utility of a novel assessment tool for problem solving, the Rapid Assessment of Problem Solving Test (RAPS), in persons with severe mental illness. Subjects were 47 outpatients with severe mental illness and an equal number of healthy controls matched for age and gender. Results confirmed all hypotheses with respect to how subjects with severe mental illness would perform on the RAPS. Specifically, the severely mentally ill subjects (1) solved fewer problems on the RAPS, (2) when they did solve problems on the test, did so far less efficiently than their healthy counterparts, and (3) the two groups differed markedly in the types of questions asked on the RAPS. The healthy control subjects tended to take a systematic, organized, but not always optimal approach to solving problems on the RAPS. The subjects with severe mental illness used some of the problem solving strategies of the healthy controls, but their performance was less consistent and tended to deteriorate when the complexity of the problem solving task increased. This was reflected by a high degree of guessing in lieu of asking constraint questions, particularly if a category-limited question was insufficient to continue the problem solving effort.

  11. Problem Solving and Learning

    NASA Astrophysics Data System (ADS)

    Singh, Chandralekha

    2009-07-01

    One finding of cognitive research is that people do not automatically acquire usable knowledge by spending lots of time on task. Because students' knowledge hierarchy is more fragmented, "knowledge chunks" are smaller than those of experts. The limited capacity of short term memory makes the cognitive load high during problem solving tasks, leaving few cognitive resources available for meta-cognition. The abstract nature of the laws of physics and the chain of reasoning required to draw meaningful inferences makes these issues critical. In order to help students, it is crucial to consider the difficulty of a problem from the perspective of students. We are developing and evaluating interactive problem-solving tutorials to help students in the introductory physics courses learn effective problem-solving strategies while solidifying physics concepts. The self-paced tutorials can provide guidance and support for a variety of problem solving techniques, and opportunity for knowledge and skill acquisition.

  12. What are the ultimate limits to computational techniques: verifier theory and unverifiability

    NASA Astrophysics Data System (ADS)

    Yampolskiy, Roman V.

    2017-09-01

    Despite significant developments in proof theory, surprisingly little attention has been devoted to the concept of proof verifiers. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of unverifiability, for which the paper could serve as a general citation in domains of theorem proving, as well as software and AI verification.

  13. Spectromicroscopy and coherent diffraction imaging: focus on energy materials applications.

    PubMed

    Hitchcock, Adam P; Toney, Michael F

    2014-09-01

    Current and future capabilities of X-ray spectromicroscopy are discussed based on coherence-limited imaging methods which will benefit from the dramatic increase in brightness expected from a diffraction-limited storage ring (DLSR). The methods discussed include advanced coherent diffraction techniques and nanoprobe-based real-space imaging using Fresnel zone plates or other diffractive optics whose performance is affected by the degree of coherence. The capabilities of current systems, improvements which can be expected, and some of the important scientific themes which will be impacted are described, with focus on energy materials applications. Potential performance improvements of these techniques based on anticipated DLSR performance are estimated. Several examples of energy sciences research problems which are out of reach of current instrumentation, but which might be solved with the enhanced DLSR performance, are discussed.

  14. A Microfluidic Chip Based on Localized Surface Plasmon Resonance for Real-Time Monitoring of Antigen-Antibody Reactions

    NASA Astrophysics Data System (ADS)

    Hiep, Ha Minh; Nakayama, Tsuyoshi; Saito, Masato; Yamamura, Shohei; Takamura, Yuzuru; Tamiya, Eiichi

    2008-02-01

Localized surface plasmon resonance (LSPR) associated with noble metal nanoparticles is important for many analytical and biological applications. The development of a microfluidic LSPR chip that allows the study of biomolecular interactions is therefore an essential requirement for micro total analysis systems (µTAS) integration. However, miniaturization of the conventional surface plasmon resonance system faces some limitations, especially the use of the Kretschmann configuration in total internal reflection mode. In this study, we address this problem by proposing a novel microfluidic LSPR chip operated with a simple collinear optical system. The poly(dimethylsiloxane) (PDMS) based microfluidic chip was fabricated by a soft-lithography technique and enables the interrogation of a specific insulin and anti-insulin antibody reaction in real time after immobilizing the antibody on its surface. Moreover, the sensing ability of the microfluidic LSPR chip was also evaluated with various glucose concentrations. The kinetic constant of the insulin and anti-insulin antibody reaction was determined, and a detection limit of 100 ng/mL insulin was achieved.

  15. Absorbing multicultural states in the Axelrod model

    NASA Astrophysics Data System (ADS)

    Vazquez, Federico; Redner, Sidney

    2005-03-01

    We determine the ultimate fate of a limit of the Axelrod model that consists of a population of leftists, centrists, and rightists. In an elemental interaction between agents, a centrist and a leftist can both become centrists or both become leftists with equal rates (similarly for a centrist and a rightist), but leftists and rightists do not interact. This interaction is applied repeatedly until the system can no longer evolve. The constraint between extremists can lead to a frustrated final state where the system consists of only leftists and rightists. In the mean field limit, we can view the evolution of the system as the motion of a random walk in the 3-dimensional space whose coordinates correspond to the density of each species. We find the exact final state probabilities and the time to reach consensus by solving for the first-passage probability of the random walk to the corresponding absorbing boundaries. The extension to a larger number of states will be discussed. This approach is a first step towards the analytic solution of Axelrod-like models.
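The mean-field dynamics described above can be simulated directly: pick interacting centrist-leftist or centrist-rightist pairs at random until the population freezes, and tally how often the frustrated leftist-plus-rightist final state occurs. The population size, symmetric initial condition and trial count below are illustrative assumptions.

```python
import random

def evolve(nL, nC, nR, rng):
    """Mean-field three-state dynamics until the system freezes.

    A centrist-leftist pair both become centrists or both become leftists
    with equal probability (likewise for centrist-rightist pairs);
    leftists and rightists never interact."""
    while nC > 0 and (nL > 0 or nR > 0):
        wL, wR = nC * nL, nC * nR            # interacting-pair weights
        if rng.random() < wL / (wL + wR):    # C-L interaction
            if rng.random() < 0.5: nC, nL = nC + 1, nL - 1  # both centrist
            else:                  nC, nL = nC - 1, nL + 1  # both leftist
        else:                                # C-R interaction
            if rng.random() < 0.5: nC, nR = nC + 1, nR - 1
            else:                  nC, nR = nC - 1, nR + 1
    return nL, nC, nR

rng = random.Random(7)
N = 30
outcomes = {"L": 0, "C": 0, "R": 0, "frozen": 0}
for _ in range(2000):
    nL, nC, nR = evolve(N // 3, N // 3, N // 3, rng)
    if   nL == N: outcomes["L"] += 1
    elif nC == N: outcomes["C"] += 1
    elif nR == N: outcomes["R"] += 1
    else:         outcomes["frozen"] += 1    # frustrated leftist+rightist state
```

Each run is exactly the random walk in density space mentioned in the abstract, absorbed either at a consensus corner or on the centrist-free edge.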

  16. EMPHASIS/Nevada UTDEM user guide. Version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, C. David; Seidel, David Bruce; Pasik, Michael Francis

The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes. The geometries and the meshes thereof are limited only by the patience of the user in meshing and by the available computing resources for the solution. UTDEM solves Maxwell's equations using finite-element method (FEM) techniques on tetrahedral elements using vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the capability to simulate systems in which the effects of free charge are important and need to be treated in a self-consistent manner. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) being accelerated by the electromagnetic forces upon the particle (Lorentz force). The motion of these particles results in a current, which is a source for the fields in Maxwell's equations.

  17. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results whose quality matches the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
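The core building block, solving a three-point Laplacian system in linear time with a tridiagonal (Thomas) solver, can be sketched for a single 1D signal. The edge-stopping weight function and all parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    bp, cp, dp = b[:], c[:], d[:]
    for i in range(1, n):                      # forward elimination
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x

def wls_smooth_1d(f, lam=5.0, edge_stop=1.0):
    """Edge-preserving 1D weighted-least-squares smoothing:
    minimize sum (u_i - f_i)^2 + lam * sum w_i (u_{i+1} - u_i)^2,
    where w_i shrinks across large jumps so edges survive (assumed weight)."""
    n = len(f)
    w = [1.0 / (1.0 + (abs(f[i + 1] - f[i]) / edge_stop) ** 2) for i in range(n - 1)]
    a = [0.0] + [-lam * w[i - 1] for i in range(1, n)]      # subdiagonal
    c = [-lam * w[i] for i in range(n - 1)] + [0.0]         # superdiagonal
    b = [1.0 + lam * ((w[i - 1] if i > 0 else 0.0) + (w[i] if i < n - 1 else 0.0))
         for i in range(n)]                                 # normal-equation diagonal
    return thomas(a, b, c, list(f))

# noisy step signal: smoothing flattens the noise but keeps the edge
random.seed(0)
f = [(0.0 if i < 50 else 4.0) + random.gauss(0, 0.2) for i in range(100)]
u = wls_smooth_1d(f)
```

For a 2D image, the paper's separable scheme would alternate such 1D solves along rows and columns.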

  18. Collaborative problem solving with a total quality model.

    PubMed

    Volden, C M; Monnig, R

    1993-01-01

    A collaborative problem-solving system committed to the interests of those involved complies with the teachings of the total quality management movement in health care. Deming espoused that any quality system must become an integral part of routine activities. A process that is used consistently in dealing with problems, issues, or conflicts provides a mechanism for accomplishing total quality improvement. The collaborative problem-solving process described here results in quality decision-making. This model incorporates Ishikawa's cause-and-effect (fishbone) diagram, Moore's key causes of conflict, and the steps of the University of North Dakota Conflict Resolution Center's collaborative problem solving model.

  19. Optimal Capacity Proportion and Distribution Planning of Wind, Photovoltaic and Hydro Power in Bundled Transmission System

    NASA Astrophysics Data System (ADS)

    Ye, X.; Tang, Q.; Li, T.; Wang, Y. L.; Zhang, X.; Ye, S. Y.

    2017-05-01

Bundled transmission systems of wind, photovoltaic and hydro power are becoming common in the Northwest and Southwest of China. To make better use of the complementary characteristics of different power sources, the installed capacity proportions of wind, photovoltaic and hydro power, and their capacity distribution over the integration nodes, are significant issues to be solved at the power system planning stage. An optimal capacity proportion and capacity distribution model for a wind, photovoltaic and hydro power bundled transmission system is proposed here, which considers the power output characteristics of power sources of different types and in different areas, based on real operation data. The transmission capacity limit of the power grid is also considered. Simulation cases are tested on a real regional system in Southwest China for the planning-level year 2020. The results verify the effectiveness of the proposed model.

  20. Solubility enhancement and delivery systems of curcumin a herbal medicine: a review.

    PubMed

    Hani, Umme; Shivakumar, H G

    2014-01-01

Curcumin (diferuloylmethane) is the main yellow bioactive component of turmeric and possesses a wide spectrum of biological actions. It has been found to have anti-inflammatory, antioxidant, anticarcinogenic, antimutagenic, anticoagulant, antifertility, antidiabetic, antibacterial, antifungal, antiprotozoal, antiviral, antifibrotic, antivenom, antiulcer, hypotensive and hypocholesteremic activities. However, these benefits are curtailed by its extremely poor aqueous solubility, which limits the bioavailability and therapeutic effects of curcumin. Nanotechnology is an available approach to solving these issues. The therapeutic efficacy of curcumin can be exploited effectively by improving its formulation properties or delivery systems. Numerous attempts have been made to design delivery systems for curcumin. Currently, nanosuspensions, micelles, nanoparticles, nano-emulsions, etc. are used to improve the in vitro dissolution velocity and in vivo efficiency of curcumin. This review focuses on methods to increase the solubility of curcumin and on various nanotechnology-based and other delivery systems for curcumin.

  1. Quality improvement in basic histotechnology: the lean approach.

    PubMed

    Clark, David

    2016-01-01

    Lean is a comprehensive system of management based on the Toyota production system (TPS), encompassing all the activities of an organization. It focuses management activity on creating value for the end-user by continuously improving operational effectiveness and removing waste. Lean management creates a culture of continuous quality improvement with a strong emphasis on developing the problem-solving capability of staff using the scientific method (Deming's Plan, Do, Check, Act cycle). Lean management systems have been adopted by a number of histopathology departments throughout the world to simultaneously improve quality (reducing errors and shortening turnround times) and lower costs (by increasing efficiency). This article describes the key concepts that make up a lean management system, and how these concepts have been adapted from manufacturing industry and applied to histopathology using a case study of lean implementation and evidence from the literature. It discusses the benefits, limitations, and pitfalls encountered when implementing lean management systems.

  2. Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra

    NASA Astrophysics Data System (ADS)

    Wang, Hefeng; Fan, Heng; Li, Fuli

    2014-01-01

    When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.

  3. Computer Software for Intelligent Systems.

    ERIC Educational Resources Information Center

    Lenat, Douglas B.

    1984-01-01

    Discusses the development and nature of computer software for intelligent systems, indicating that the key to intelligent problem-solving lies in reducing the random search for solutions. Formal reasoning methods, expert systems, and sources of power in problem-solving are among the areas considered. Specific examples of such software are…

  4. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
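A minimal single-parameter example of the spectral collocation ingredient: a Chebyshev differentiation matrix (in the style of Trefethen's `cheb` routine) applied to the eigenvalue problem -u'' = λu on (-1, 1) with Dirichlet conditions, whose exact eigenvalues (kπ/2)^2 make the spectral accuracy easy to check. This is far simpler than a multiparameter eigenvalue problem, and all discretization choices are assumptions for illustration.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))        # fix the diagonal via row sums
    return D, x

# -u'' = lambda * u on (-1, 1), u(+-1) = 0; exact eigenvalues (k*pi/2)^2
n = 32
D, x = cheb(n)
D2 = (D @ D)[1:-1, 1:-1]               # impose Dirichlet conditions by deletion
lam = np.sort(np.linalg.eigvals(-D2).real)
exact = (np.arange(1, 5) * np.pi / 2) ** 2
err = np.abs(lam[:4] - exact).max()    # spectral accuracy on the low modes
```

The same collocation matrices, built per coordinate after separation of variables, are the raw material that algebraic MEP solvers operate on.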

  5. Small signal analysis of four-wave mixing in InAs/GaAs quantum-dot semiconductor optical amplifiers

    NASA Astrophysics Data System (ADS)

    Ma, Shaozhen; Chen, Zhe; Dutta, Niloy K.

    2009-02-01

A model to study four-wave mixing (FWM) wavelength conversion in InAs-GaAs quantum-dot semiconductor optical amplifiers is proposed. Rate equations involving two QD states are solved to simulate the carrier density modulation in the system. The results show that the existence of the QD excited state contributes to the ultrafast recovery time of the single-pulse response by serving as a carrier reservoir for the QD ground state; the speed limitations of this mechanism are also studied. The nondegenerate four-wave mixing process with an injected small-intensity-modulation probe signal is simulated using this model; a set of coupled wave equations describing the evolution of all frequency components in the active region of the QD-SOA is derived and solved numerically. The results show that better FWM conversion efficiency can be obtained compared with a regular bulk SOA, and that the four-wave mixing bandwidth can exceed 1.5 THz when the detuning between pump and probe lights is 0.5 nm.

  6. Science modelling in pre-calculus: how to make mathematics problems contextually meaningful

    NASA Astrophysics Data System (ADS)

    Sokolowski, Andrzej; Yalvac, Bugrahan; Loving, Cathleen

    2011-04-01

    'Use of mathematical representations to model and interpret physical phenomena and solve problems is one of the major teaching objectives in high school math curriculum' (National Council of Teachers of Mathematics (NCTM), Principles and Standards for School Mathematics, NCTM, Reston, VA, 2000). Commonly used pre-calculus textbooks provide a wide range of application problems. However, these problems focus students' attention on evaluating or solving pre-arranged formulas for given values. The role of scientific content is reduced to provide a background for these problems instead of being sources of data gathering for inducing mathematical tools. Students are neither required to construct mathematical models based on the contexts nor are they asked to validate or discuss the limitations of applied formulas. Using these contexts, the instructor may think that he/she is teaching problem solving, where in reality he/she is teaching algorithms of the mathematical operations (G. Kulm (ed.), New directions for mathematics assessment, in Assessing Higher Order Thinking in Mathematics, Erlbaum, Hillsdale, NJ, 1994, pp. 221-240). Without a thorough representation of the physical phenomena and the mathematical modelling processes undertaken, problem solving unintentionally appears as simple algorithmic operations. In this article, we deconstruct the representations of mathematics problems from selected pre-calculus textbooks and explicate their limitations. We argue that the structure and content of those problems limits students' coherent understanding of mathematical modelling, and this could result in weak student problem-solving skills. Simultaneously, we explore the ways to enhance representations of those mathematical problems, which we have characterized as lacking a meaningful physical context and limiting coherent student understanding. 
In light of our discussion, we recommend an alternative to strengthen the process of teaching mathematical modelling - utilization of computer-based science simulations. Although there are several exceptional computer-based science simulations designed for mathematics classes (see, e.g. Kinetic Book (http://www.kineticbooks.com/) or Gizmos (http://www.explorelearning.com/)), we concentrate mainly on the PhET Interactive Simulations developed at the University of Colorado at Boulder (http://phet.colorado.edu/) in generating our argument that computer simulations more accurately represent the contextual characteristics of scientific phenomena than their textual descriptions.
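
The modelling process the authors advocate, inducing a mathematical model from data gathered in a simulation rather than plugging values into a pre-arranged formula, can be sketched with a least-squares fit. The position-vs-time readings below are hypothetical, not drawn from PhET or any cited simulation.

```python
# Hypothetical position-vs-time readings gathered from a motion simulation.
t = [0.0, 1.0, 2.0, 3.0, 4.0]   # seconds
x = [2.0, 5.0, 8.0, 11.0, 14.0]  # metres

def linear_fit(ts, xs):
    """Ordinary least-squares fit x = a*t + b, using the closed-form formulas."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_x = sum(xs) / n
    sxy = sum((ti - mean_t) * (xi - mean_x) for ti, xi in zip(ts, xs))
    sxx = sum((ti - mean_t) ** 2 for ti in ts)
    a = sxy / sxx            # slope: the induced velocity
    b = mean_x - a * mean_t  # intercept: the induced initial position
    return a, b

velocity, x0 = linear_fit(t, x)
print(f"induced model: x(t) = {velocity:.2f}*t + {x0:.2f}")
```

The point of the exercise is that the model (slope and intercept) emerges from the gathered data, and its validity can then be discussed against the simulated physics.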

  7. Constraint Embedding Technique for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide efficient algorithms and methods for implementing such multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations of the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra to formulate the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are then applied to the reformulated tree-topology system. 
    Thus, in essence, the new technique allows conversion of a system with closure constraints into an equivalent tree-topology system, and thereby allows one to take advantage of the host of techniques available to the latter class of systems. This technology is highly suitable for the class of multibody systems where the closure constraints are local, i.e., where they are confined to small groupings of bodies within the system. Important examples of such local closure constraints are those associated with four-bar linkages, geared motors, differential suspensions, etc. One can eliminate these closure constraints and convert the system into a tree-topology system by embedding the constraints directly into the system dynamics and effectively replacing the body groupings with virtual aggregate bodies. Once this is done, one can apply the well-known results and algorithms for tree-topology systems to solve the dynamics of such closed-chain systems.
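
The three-step solution in the abstract (free accelerations, correction force, corrected accelerations) can be illustrated on a deliberately tiny example: two point masses whose accelerations are constrained to be equal, as by a rigid link. This is a scalar Lagrange-multiplier sketch under assumed point-mass dynamics, not the spatial-operator O(N) algorithm itself.

```python
# Two point masses whose closure constraint is a1 == a2 (a rigid link).
m1, m2 = 2.0, 1.0   # masses (kg)
f1, f2 = 4.0, 1.0   # applied forces (N)

# (a) Ignore the closure constraint: free unconstrained accelerations.
a1_free = f1 / m1
a2_free = f2 / m2

# (b) Compute the correction force (a Lagrange multiplier) that
#     enforces the constraint a1 - a2 = 0.
lam = -(a1_free - a2_free) / (1.0 / m1 + 1.0 / m2)

# (c) Correct the unconstrained accelerations with the correction force.
a1 = a1_free + lam / m1
a2 = a2_free - lam / m2

print(a1, a2)  # both equal the common acceleration (f1 + f2) / (m1 + m2)
```

In the real technique the same pattern appears with spatial operators and body groupings in place of scalars, and the local constraints are embedded so that the corrected system is again tree-topology.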

  8. ALPS - A LINEAR PROGRAM SOLVER

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
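
ALPS itself is an APL2 program, but the pure binary (0-1 integer) problem class the abstract mentions can be shown with a small brute-force sketch; the coefficients below are invented for illustration, and exhaustive enumeration is not ALPS's solution technique.

```python
from itertools import product

# Hypothetical 0-1 program: maximize 4*x1 + 3*x2 + 5*x3
# subject to 2*x1 + 3*x2 + 4*x3 <= 5, with each xi in {0, 1}.
values = [4, 3, 5]
weights = [2, 3, 4]
capacity = 5

best_value, best_x = None, None
for x in product((0, 1), repeat=3):  # enumerate all 2^3 assignments
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:  # feasible?
        value = sum(v * xi for v, xi in zip(values, x))
        if best_value is None or value > best_value:
            best_value, best_x = value, x

print(best_value, best_x)
```

Enumeration is exponential in the number of variables, which is why dedicated solvers such as ALPS use more efficient techniques for binary programs.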

  9. Assessing the Internal Dynamics of Mathematical Problem Solving in Small Groups.

    ERIC Educational Resources Information Center

    Artzt, Alice F.; Armour-Thomas, Eleanor

    The purpose of this exploratory study was to examine the problem-solving behaviors and perceptions of (n=27) seventh-grade students as they worked on solving a mathematical problem within a small-group setting. An assessment system was developed that allowed for this analysis. To assess problem-solving behaviors within a small group a Group…

  10. Problem Solving of Newton's Second Law through a System of Total Mass Motion

    ERIC Educational Resources Information Center

    Abdullah, Helmi

    2014-01-01

    Nowadays, many researchers discovered various effective strategies in teaching physics, from traditional to modern strategy. However, research on physics problem solving is still inadequate. Physics problem is an integral part of physics learning and requires strategy to solve it. Besides that, problem solving is the best way to convey principle,…

  11. The active control strategy on the output power for photovoltaic-storage systems based on extended PQ-QV-PV Node

    NASA Astrophysics Data System (ADS)

    Xu, Chen; Zhou, Bao-Rong; Zhai, Jian-Wei; Zhang, Yong-Jun; Yi, Ying-Qi

    2017-05-01

    In order to solve the problem of voltage exceeding specified limits and improve the penetration of photovoltaics in distribution networks, we can make full use of the active power regulation ability of energy storage (ES) and the reactive power regulation ability of grid-connected photovoltaic inverters to provide active and reactive power support for the distribution network. A strategy for actively controlling the output power of a photovoltaic-storage system based on the extended PQ-QV-PV node is proposed, derived by analyzing the voltage regulation mechanism at the point of common coupling (PCC) of photovoltaics with energy storage (PVES) through control of the photovoltaic inverter and the energy storage. The strategy sets a small allowable voltage fluctuation range for every photovoltaic unit by letting the node type of the PCC switch among PQ, PV and QV. The simulation results indicate that the active control method provides a better solution to the problem of voltage exceeding specified limits when photovoltaics are connected to the electric distribution network.
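
The basic mechanism the abstract relies on, using inverter reactive power to pull the PCC voltage back inside its limits, can be sketched with a linearized toy model. The sensitivity coefficient and voltage limits below are assumed values for illustration, not figures from the paper.

```python
# Toy linearized PCC voltage model: V = V_base + k_q * Q  (per-unit).
# k_q is an assumed sensitivity of voltage to reactive power injection.
V_base = 1.06   # voltage while PV exports active power, above the limit
V_max = 1.05
V_min = 0.95
k_q = 0.04      # p.u. voltage change per p.u. of reactive power (assumed)

def reactive_setpoint(v_base, v_min=V_min, v_max=V_max, k=k_q):
    """Choose the inverter reactive output Q that brings V inside limits."""
    if v_base > v_max:
        return (v_max - v_base) / k   # absorb reactive power (Q < 0)
    if v_base < v_min:
        return (v_min - v_base) / k   # inject reactive power (Q > 0)
    return 0.0

Q = reactive_setpoint(V_base)
V = V_base + k_q * Q
print(Q, V)
```

In the paper the same effect is achieved by converting the PCC node type (PQ, PV, QV) in a power-flow model rather than through an explicit linearized sensitivity.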

  12. SU(N ) fermions in a one-dimensional harmonic trap

    NASA Astrophysics Data System (ADS)

    Laird, E. K.; Shi, Z.-Y.; Parish, M. M.; Levinsen, J.

    2017-09-01

    We conduct a theoretical study of SU (N ) fermions confined by a one-dimensional harmonic potential. First, we introduce a numerical approach for solving the trapped interacting few-body problem, by which one may obtain accurate energy spectra across the full range of interaction strengths. In the strong-coupling limit, we map the SU (N ) Hamiltonian to a spin-chain model. We then show that an existing, extremely accurate ansatz—derived for a Heisenberg SU(2) spin chain—is extendable to these N -component systems. Lastly, we consider balanced SU (N ) Fermi gases that have an equal number of particles in each spin state for N =2 ,3 ,4 . In the weak- and strong-coupling regimes, we find that the ground-state energies rapidly converge to their expected values in the thermodynamic limit with increasing atom number. This suggests that the many-body energetics of N -component fermions may be accurately inferred from the corresponding few-body systems of N distinguishable particles.
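
As a baseline for the balanced gases discussed above, the noninteracting limit is easy to check by hand: each of the N spin components independently fills the lowest harmonic levels E_i = (i + 1/2) in units of ħω. This level-filling count is standard textbook material, not a result from the paper.

```python
def noninteracting_energy(n_per_component, n_components):
    """Ground-state energy (units of hbar*omega) of a balanced, noninteracting
    SU(N) Fermi gas in a 1D harmonic trap: each of the N components fills the
    lowest n levels with energies i + 1/2, i = 0, 1, ..., giving N * n^2 / 2."""
    filled = sum(i + 0.5 for i in range(n_per_component))  # = n^2 / 2
    return n_components * filled

# Balanced SU(2) gas with 3 particles per spin state:
print(noninteracting_energy(3, 2))
```

The interacting spectra computed in the paper interpolate between this weak-coupling baseline and the strong-coupling spin-chain limit.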

  13. A new modal superposition method for nonlinear vibration analysis of structures using hybrid mode shapes

    NASA Astrophysics Data System (ADS)

    Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat

    2018-07-01

    In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of the steady-state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. The stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the responses of these limiting linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes, which are defined as linear combinations of the modal vectors of the limiting linear systems, is proposed to determine the periodic response of nonlinear systems. In this method, the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This decreases the number of modes that must be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using the describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The developed method is applied to two different systems: a lumped parameter model and a finite element model. Several case studies are performed, and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method, which utilizes the mode shapes of the underlying linear system.
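
The final stage the abstract describes, solving the resulting nonlinear algebraic equations by Newton's method, can be sketched on a single-DOF cubic-stiffness balance k*x + k3*x^3 = F. The numbers are illustrative, and the arc-length continuation used in the paper is omitted here.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Single-DOF force balance with cubic (hardening) stiffness:
k, k3, F = 1.0, 1.0, 2.0
residual = lambda x: k * x + k3 * x**3 - F
jacobian = lambda x: k + 3.0 * k3 * x**2

x = newton(residual, jacobian, x0=1.5)
print(x)  # x = 1 satisfies x + x^3 = 2
```

Arc-length continuation extends this basic iteration so the solver can follow the response curve around turning points, where plain Newton on a fixed frequency would fail.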

  14. Local (L, [epsilon])-Approximation of a Function of Single Variable: An Alternative Way to Define Limit

    ERIC Educational Resources Information Center

    Bokhari, M. A.; Yushau, B.

    2006-01-01

    At the start of a freshman calculus course, many students conceive the classical definition of limit as the most problematic part of calculus. They not only find it difficult to understand, but also consider it of no use while solving most of the limit problems and therefore, skip it. This paper reformulates the rigorous definition of limit, which…

  15. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    Continuing research in a program aimed at developing a robot computer problem-solving system is reported. The motivation and results of a theoretical investigation into the general properties of behavioral systems are described. Some of the important issues that a general theory of behavioral organization should encompass are outlined and discussed.

  16. Appendix M. Research Utilization and Problem Solving

    ERIC Educational Resources Information Center

    Jung, Charles

    The Research Utilization and Problem Solving (RUPS) Model--an instructional system designed to provide the needed competencies for an entire staff to engage in systems analysis and systems synthesis procedures prior to assessing educational needs and developing curriculum to meet the needs identified--is intended to facilitate the development of…

  17. Problem solving as intelligent retrieval from distributed knowledge sources

    NASA Technical Reports Server (NTRS)

    Chen, Zhengxin

    1987-01-01

    Distributed computing in intelligent systems is investigated from a different perspective. Taking the view that problem solving can be regarded as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.

  18. A Decision Support System for Evaluating and Selecting Information Systems Projects

    NASA Astrophysics Data System (ADS)

    Deng, Hepu; Wibowo, Santoso

    2009-01-01

    This chapter presents a decision support system (DSS) for effectively solving the information systems (IS) project selection problem. The proposed DSS recognizes the multidimensional nature of the IS project selection problem, the availability of multicriteria analysis (MA) methods, and the preferences of the decision-maker (DM) for the use of specific MA methods in a given situation. A knowledge base consisting of IF-THEN production rules is developed for assisting the DM in systematically adopting the most appropriate method, making efficient use of the powerful reasoning and explanation capabilities of intelligent DSS. The idea of letting the problem to be solved determine the method to be used is incorporated into the proposed DSS. As a result, effective decisions can be made for solving the IS project selection problem. An example is presented to demonstrate the applicability of the proposed DSS to selecting IS projects in real-world situations.
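
An IF-THEN knowledge base of this kind can be sketched as a small rule table. The situations and multicriteria-analysis methods below are hypothetical examples (AHP and ELECTRE are real MA methods, but these rules are not the ones from the chapter).

```python
# Hypothetical IF-THEN production rules mapping features of the decision
# situation to a multicriteria analysis (MA) method.
RULES = [
    # (condition on the situation, recommended MA method)
    (lambda s: s["criteria"] > 10 and s["preferences"] == "pairwise", "AHP"),
    (lambda s: s["preferences"] == "outranking", "ELECTRE"),
    (lambda s: s["preferences"] == "weights", "weighted sum"),
]

def recommend(situation, rules=RULES, default="weighted sum"):
    """Fire the first rule whose condition matches the situation."""
    for condition, method in rules:
        if condition(situation):
            return method
    return default

print(recommend({"criteria": 12, "preferences": "pairwise"}))
```

A full DSS would add explanation facilities (reporting which rule fired and why) on top of this matching loop.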

  19. Bio-Inspired Human-Level Machine Learning

    DTIC Science & Technology

    2015-10-25

    ...extensions to high-level cognitive functions such as anagram problem solving. We expect that the bio-inspired human-level machine learning combined with... numbers of 10^11 neurons and 10^14 synaptic connections in the human brain. In previous work, we experimentally demonstrated the feasibility of cognitive...

  20. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.

    1971-01-01

    A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.

  1. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things

    PubMed Central

    Akan, Ozgur B.

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for information and communication technology (ICT), as they enable an optimisation framework for developing advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by the laminar flow's parabolic velocity profile and by the finite number of ligand receptors, which leads to receiver saturation. The model also captures the effects of the reactive surface depletion layer resulting from mass transport limitations, and of the moving reaction boundary originating from the passage of a finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed-form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics. PMID:29415019

  2. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things.

    PubMed

    Kuscu, Murat; Akan, Ozgur B

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for information and communication technology (ICT), as they enable an optimisation framework for developing advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by the laminar flow's parabolic velocity profile and by the finite number of ligand receptors, which leads to receiver saturation. The model also captures the effects of the reactive surface depletion layer resulting from mass transport limitations, and of the moving reaction boundary originating from the passage of a finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed-form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics.
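
The contrast the authors draw, numerical solution versus closed-form approximation, can be illustrated on a much simpler 1D convection-diffusion equation. The grid and coefficients below are arbitrary, and this explicit upwind finite-difference scheme is far cruder than the FEA model the authors compare against.

```python
# Explicit upwind finite-difference scheme for 1D convection-diffusion:
# dc/dt = D * d2c/dx2 - u * dc/dx, with zero concentration at the boundaries.
D, u = 0.1, 0.5          # diffusion coefficient, flow velocity (arbitrary)
dx, dt = 0.1, 0.01       # chosen so the explicit scheme is stable
nx, steps = 50, 100

c = [0.0] * nx
c[10] = 1.0              # initial finite-duration concentration pulse

for _ in range(steps):
    new = c[:]
    for i in range(1, nx - 1):
        diff = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx**2
        conv = -u * (c[i] - c[i - 1]) / dx   # upwind difference for u > 0
        new[i] = c[i] + dt * (diff + conv)
    c = new

peak = max(range(nx), key=lambda i: c[i])
print(peak)  # the pulse has drifted downstream of its initial position
```

A closed-form model of the kind the paper derives replaces this time-stepping entirely with explicit expressions for pulse width, delay and amplitude.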

  3. Lecture Notes on Requirements Elicitation

    DTIC Science & Technology

    1994-03-01

    ...ability to abstract away from the details of a problem and design a system that not only solves the problem but incorporates cutting-edge technology and... sound argument is presented. You have the uncanny... problem-solving skills on your last project, where you were the principal requirements analyst. Your undergraduate degree is in mathematics, and you...

  4. Secure provision of reactive power ancillary services in competitive electricity markets

    NASA Astrophysics Data System (ADS)

    El-Samahy, Ismael

    The research work presented in this thesis discusses various complex issues associated with reactive power management and pricing in the context of new operating paradigms in deregulated power systems, proposing appropriate policy solutions. An integrated two-level framework for reactive power management is set forth, which is both suitable for a competitive market and ensures a secure and reliable operation of the associated power system. The framework is generic in nature and can be adopted for any electricity market structure. The proposed hierarchical reactive power market structure comprises two stages: procurement of reactive power resources on a seasonal basis, and real-time reactive power dispatch. The main objective of the proposed framework is to provide appropriate reactive power support from service providers at least cost, while ensuring a secure operation of the power system. The proposed procurement procedure is based on a two-step optimization model. First, the marginal benefits of reactive power supply from each provider, with respect to system security, are obtained by solving a loadability-maximization problem subject to transmission security constraints imposed by voltage and thermal limits. Second, the selected set of generators is determined by solving an optimal power flow (OPF)-based auction. This auction maximizes a societal advantage function comprising generators' offers and their corresponding marginal benefits with respect to system security, and considering all transmission system constraints. The proposed procedure yields the selected set of generators and zonal price components, which would form the basis for seasonal contracts between the system operator and the selected reactive power service providers. The main objective of the proposed reactive power dispatch model is to minimize the total payment burden on the Independent System Operator (ISO), which is associated with reactive power dispatch. 
The real power generation is decoupled and assumed to be fixed during the reactive power dispatch procedures; however, the effect of reactive power on real power is considered in the model by calculating the required reduction in real power output of a generator due to an increase in its reactive power supply. In this case, real power generation is allowed to be rescheduled, within given limits, from the already dispatched levels obtained from the energy market clearing process. The proposed dispatch model achieves the main objective of an ISO in a competitive electricity market, which is to provide the required reactive power support from generators at least cost while ensuring a secure operation of the power system. The proposed reactive power procurement and dispatch models capture both the technical and economic aspects of power system operation in competitive electricity markets; however, from an optimization point of view, these models represent non-convex mixed integer non-linear programming (MINLP) problems due to the presence of binary variables associated with the different regions of reactive power operation in a synchronous generator. Such MINLP optimization problems are difficult to solve, especially for an actual power system. A novel Generator Reactive Power Classification (GRPC) algorithm is proposed in this thesis to address this issue, with the advantage of iteratively solving the optimization models as a series of non-linear programming (NLP) sub-problems. The proposed reactive power procurement and dispatch models are implemented and tested on the CIGRE 32-bus system, with several case studies that represent different practical operating scenarios. The developed models are also compared with other approaches for reactive power provision, and the results demonstrate the robustness and effectiveness of the proposed model. 
The results clearly reveal the main features of the proposed models for optimal provision of reactive power ancillary service, in order to suit the requirements of an ISO under today's stressed system conditions in a competitive market environment.
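
The OPF-based auction in the abstract maximizes a societal advantage function built from providers' offers and their marginal security benefits. Ignoring all network constraints, the selection logic reduces to a comparison like the following toy sketch; every number here is invented for illustration.

```python
# Toy selection: choose reactive power providers whose marginal benefit to
# system security exceeds their offer price (network constraints ignored).
providers = {
    # name: (marginal benefit $/MVAr, offer price $/MVAr) -- invented values
    "G1": (12.0, 8.0),
    "G2": (5.0, 9.0),
    "G3": (10.0, 10.0),
}

selected = {name for name, (benefit, offer) in providers.items()
            if benefit > offer}
advantage = sum(benefit - offer
                for name, (benefit, offer) in providers.items()
                if name in selected)
print(selected, advantage)
```

In the thesis, the comparison is embedded in an optimal power flow, so transmission constraints and the different reactive operating regions of each generator also shape the selection.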

  5. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  6. A review on classification methods for solving fully fuzzy linear systems

    NASA Astrophysics Data System (ADS)

    Daud, Wan Suhana Wan; Ahmad, Nazihah; Aziz, Khairu Azlan Abd

    2015-12-01

    A Fully Fuzzy Linear System (FFLS) arises when there are fuzzy numbers on both sides of a linear system. Such systems are significant today because many linear systems involve uncertain parameters, especially in mathematics, engineering and finance. Many researchers and practitioners use the FFLS to model their problems and apply various methods to solve it. In this paper, we present the outcome of a comprehensive review of the various methods used for solving the FFLS. We classify our findings based on the type of parameters used in the FFLS, either restricted or unrestricted. We also discuss some of the methods by illustrating numerical examples and identify the differences between the methods. Finally, we summarize all findings in a table. We hope this study will encourage researchers to appreciate the use of these methods and make it easier for them to choose the right method, or to propose a new method, for solving the FFLS.
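
For the restricted case of positive triangular fuzzy numbers with the usual approximate multiplication (a1,a2,a3)⊗(x1,x2,x3) ≈ (a1·x1, a2·x2, a3·x3), a 1×1 fully fuzzy system A⊗x = b can be solved componentwise. This is only the simplest illustrative case, not any particular one of the reviewed classification methods.

```python
def solve_ffls_1x1(a, b):
    """Solve a (x) x = b for positive triangular fuzzy numbers a = (a1,a2,a3)
    and b = (b1,b2,b3), using componentwise approximate multiplication."""
    return tuple(bi / ai for ai, bi in zip(a, b))

def fuzzy_mul(a, x):
    """Approximate product of two positive triangular fuzzy numbers."""
    return tuple(ai * xi for ai, xi in zip(a, x))

a = (2.0, 3.0, 4.0)    # positive triangular fuzzy coefficient
b = (4.0, 9.0, 16.0)   # positive triangular fuzzy right-hand side
x = solve_ffls_1x1(a, b)
print(x, fuzzy_mul(a, x))
```

For the solution to be a valid triangular fuzzy number its components must stay ordered (x1 ≤ x2 ≤ x3), which is one reason the restricted and unrestricted cases in the review require different methods.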

  7. Social Problem Solving and Depressive Symptoms over Time: A Randomized Clinical Trial of Cognitive-Behavioral Analysis System of Psychotherapy, Brief Supportive Psychotherapy, and Pharmacotherapy

    ERIC Educational Resources Information Center

    Klein, Daniel N.; Leon, Andrew C.; Li, Chunshan; D'Zurilla, Thomas J.; Black, Sarah R.; Vivian, Dina; Dowling, Frank; Arnow, Bruce A.; Manber, Rachel; Markowitz, John C.; Kocsis, James H.

    2011-01-01

    Objective: Depression is associated with poor social problem solving, and psychotherapies that focus on problem-solving skills are efficacious in treating depression. We examined the associations between treatment, social problem solving, and depression in a randomized clinical trial testing the efficacy of psychotherapy augmentation for…

  8. Using Problem-solving Therapy to Improve Problem-solving Orientation, Problem-solving Skills and Quality of Life in Older Hemodialysis Patients.

    PubMed

    Erdley-Kass, Shiloh D; Kass, Darrin S; Gellis, Zvi D; Bogner, Hillary A; Berger, Andrea; Perkins, Robert M

    2017-08-24

    The objective was to determine the effectiveness of Problem-Solving Therapy (PST) in older hemodialysis (HD) patients by assessing changes in health-related quality of life and problem-solving skills. Thirty-three HD patients in an outpatient hemodialysis center without active medical or psychiatric illness were enrolled. The intervention group (n = 15) received PST from a licensed social worker for 6 weeks, whereas the control group (n = 18) received usual care. In comparison to the control group, patients receiving the PST intervention reported improved perceptions of mental health, were more likely to view their problems with a positive orientation, and were more likely to use functional problem-solving methods. Furthermore, this group was also more likely to view their overall health, activity limits, social activities, and ability to accomplish desired tasks with a more positive mindset. The results demonstrate that PST may positively impact the mental health components of quality of life and problem-solving coping among older HD patients. PST is an effective, efficient, and easy-to-implement intervention that can benefit problem-solving abilities and mental health-related quality of life in older HD patients. In turn, this will help patients manage the daily living activities related to their medical condition and reduce daily stressors.

  9. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations of existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack the versatility of hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.

  10. Personal and parental problem drinking: effects on problem-solving performance and self-appraisal.

    PubMed

    Slavkin, S L; Heimberg, R G; Winning, C D; McCaffrey, R J

    1992-01-01

    This study examined the problem-solving performances and self-appraisals of problem-solving ability of college-age subjects with and without parental history of problem drinking. Contrary to our predictions, children of problem drinkers (COPDs) were rated as somewhat more effective in their problem-solving skills than non-COPDs, undermining prevailing assumptions about offspring from alcoholic households. While this difference was not large and was qualified by other variables, subjects' own alcohol abuse did exert a detrimental effect on problem-solving performance, regardless of parental history of problem drinking. However, a different pattern was evident for problem-solving self-appraisals. Alcohol-abusing non-COPDs saw themselves as effective problem-solvers while alcohol-abusing COPDs appraised themselves as poor problem-solvers. In addition, the self-appraisals of alcohol-abusing COPDs were consistent with objective ratings of solution effectiveness (i.e., they were both negative) while alcohol-abusing non-COPDs were overly positive in their appraisals, opposing the judgments of trained raters. This finding suggests that the relationship between personal alcohol abuse and self-appraised problem-solving abilities may differ as a function of parental history of problem drinking. Limitations on the generalizability of findings are addressed.

  11. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    PubMed

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  12. User localization in complex environments by multimodal combination of GPS, WiFi, RFID, and pedometer technologies.

    PubMed

    Dao, Trung-Kien; Nguyen, Hung-Long; Pham, Thanh-Thuy; Castelli, Eric; Nguyen, Viet-Tung; Nguyen, Dinh-Van

    2014-01-01

    Many user localization technologies and methods have been proposed for either indoor or outdoor environments. However, each technology has its own drawbacks. Recently, many studies and designs have proposed systems that combine multiple localization technologies to provide higher-precision results and overcome the limitations of any single technology used alone. In this paper, a conceptual design for a general localization platform that combines multiple localization technologies is introduced. The combination is realized by dividing spaces into grid points. To demonstrate this platform, a system using GPS, RFID, WiFi, and pedometer technologies is established. Experimental results show that accuracy and availability are improved in comparison with each technology used individually.
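The grid-point combination scheme can be illustrated with a small sketch. This is not the paper's algorithm: the Gaussian scoring, the per-technology accuracy values (sigma), and the multiplicative fusion rule are all illustrative assumptions.

```python
import numpy as np

def gaussian_score(grid, estimate, sigma):
    """Score each grid point by its distance to one technology's estimate."""
    d2 = np.sum((grid - estimate) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fuse_on_grid(grid, estimates):
    """Combine per-technology scores multiplicatively, pick the best grid point.

    `estimates` maps a technology name to (position_estimate, sigma), where
    sigma reflects that technology's assumed accuracy.
    """
    score = np.ones(len(grid))
    for pos, sigma in estimates.values():
        score *= gaussian_score(grid, np.asarray(pos), sigma)
    return grid[np.argmax(score)]

# 1 m grid over a 10 m x 10 m area
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])

fused = fuse_on_grid(grid, {
    "wifi": ((3.0, 4.0), 3.0),       # coarse estimate
    "rfid": ((2.0, 5.0), 1.0),       # accurate but sparse
    "pedometer": ((3.0, 5.0), 2.0),  # dead-reckoned
})
```

The fused point lands closest to the most accurate technology's estimate, weighted by the others, which is the qualitative behavior the combination platform aims for.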

  13. Intelligent tutoring using HyperCLIPS

    NASA Technical Reports Server (NTRS)

    Hill, Randall W., Jr.; Pickering, Brad

    1990-01-01

    HyperCard is a popular hypertext-like system used for building user interfaces to databases and other applications, and CLIPS is a highly portable government-owned expert system shell. We developed HyperCLIPS in order to fill a gap in the U.S. Army's computer-based instruction tool set; it was conceived as a development environment for building adaptive practical exercises for subject-matter problem-solving, though it is not limited to this approach to tutoring. Once HyperCLIPS was developed, we set out to implement a practical exercise prototype using HyperCLIPS in order to demonstrate the following concepts: learning can be facilitated by doing; student performance evaluation can be done in real-time; and the problems in a practical exercise can be adapted to the individual student's knowledge.

  14. User Localization in Complex Environments by Multimodal Combination of GPS, WiFi, RFID, and Pedometer Technologies

    PubMed Central

    Dao, Trung-Kien; Nguyen, Hung-Long; Pham, Thanh-Thuy; Nguyen, Viet-Tung; Nguyen, Dinh-Van

    2014-01-01

    Many user localization technologies and methods have been proposed for either indoor or outdoor environments. However, each technology has its own drawbacks. Recently, many studies and designs have proposed systems that combine multiple localization technologies to provide higher-precision results and overcome the limitations of any single technology used alone. In this paper, a conceptual design for a general localization platform that combines multiple localization technologies is introduced. The combination is realized by dividing spaces into grid points. To demonstrate this platform, a system using GPS, RFID, WiFi, and pedometer technologies is established. Experimental results show that accuracy and availability are improved in comparison with each technology used individually. PMID:25147866

  15. The UTRC wind energy conversion system performance analysis for horizontal axis wind turbines (WECSPER)

    NASA Technical Reports Server (NTRS)

    Egolf, T. A.; Landgrebe, A. J.

    1981-01-01

    The theory for the UTRC Energy Conversion System Performance Analysis (WECSPER) for the prediction of horizontal axis wind turbine performance is presented. Major features of the analysis are the ability to: (1) treat the wind turbine blades as lifting lines with a prescribed wake model; (2) solve for the wake-induced inflow and blade circulation using real nonlinear airfoil data; and (3) iterate internally to obtain a compatible wake transport velocity and blade loading solution. This analysis also provides an approximate treatment of wake distortions due to tower shadow or wind shear profiles. Finally, selected results of internal UTRC application of the analysis to existing wind turbines and correlation with limited test data are described.

  16. Cost-effective use of minicomputers to solve structural problems

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Foster, E. P.

    1978-01-01

    Minicomputers are receiving increased use throughout the aerospace industry. Until recently, their use focused primarily on process control and numerically controlled tooling applications, while their exposure to and the opportunity for structural calculations has been limited. With the increased availability of this computer hardware, the question arises as to the feasibility and practicality of carrying out comprehensive structural analysis on a minicomputer. This paper presents results on the potential for using minicomputers for structural analysis by (1) selecting a comprehensive, finite-element structural analysis system in use on large mainframe computers; (2) implementing the system on a minicomputer; and (3) comparing the performance of the minicomputers with that of a large mainframe computer for the solution to a wide range of finite element structural analysis problems.

  17. Simulating propagation of coherent light in random media using the Fredholm type integral equation

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2017-06-01

    Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they can become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow the behavior of coherent light to be predicted, but only for an averaged realization of the scattering medium. This limits their application in studying the many physical phenomena tied to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and with finite-difference simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
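A minimal sketch of the "discretize and solve the linear system" route mentioned above, assuming a Fredholm equation of the second kind discretized by the Nystrom method with trapezoidal quadrature (the authors' actual kernel and discretization are not given in the abstract):

```python
import numpy as np

def solve_fredholm2(kernel, f, a, b, lam=1.0, n=200):
    """Nystrom discretization of u(x) = f(x) + lam * int_a^b K(x,y) u(y) dy.

    Uses the trapezoidal rule; returns the nodes and the solution there.
    """
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))   # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])       # n x n kernel matrix
    A = np.eye(n) - lam * K * w[None, :]     # (I - lam K W) u = f
    u = np.linalg.solve(A, f(x))
    return x, u

# Check against a known solution: K(x,y) = x*y on [0,1], f(x) = x.
# u(x) = x + lam*x * int_0^1 y*u(y) dy  has solution u(x) = c*x with
# c = 1 / (1 - lam/3); for lam = 1/2 this gives u(x) = (6/5) x.
x, u = solve_fredholm2(lambda x, y: x * y, lambda x: x, 0.0, 1.0, lam=0.5)
```

Replacing `np.linalg.solve` with an iterative or Monte Carlo solver, as the abstract mentions, changes only the last step.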

  18. Effect of Tutorial Giving on The Topic of Special Theory of Relativity in Modern Physics Course Towards Students’ Problem-Solving Ability

    NASA Astrophysics Data System (ADS)

    Hartatiek; Yudyanto; Haryoto, Dwi

    2017-05-01

    A Special Theory of Relativity handbook has been successfully developed to guide students' tutorial activity in the Modern Physics course. Students' low problem-solving ability was addressed by offering tutorials in addition to the lecture class, since class time during the course is too limited for students to work through enough exercises to build problem-solving ability. The explicit problem-solving based tutorial handbook was written around five problem-solving strategies: (1) focus on the problem, (2) picture the physical facts, (3) plan the solution, (4) solve the problem, and (5) check the result. This research and development (R&D) study consisted of three main steps: (1) preliminary study, (2) development of the draft product, and (3) product validation. The draft product was validated by experts, who measured the feasibility of the material and predicted the effect of the tutorials by means of questionnaires on a scale of 1 to 4. The students' problem-solving ability in the Special Theory of Relativity showed very good qualification, implying that the tutorials, supported by the handbook, increased students' problem-solving ability. The empirical test revealed that the developed handbook significantly improved students' concept mastery and problem-solving ability; both were in the middle category, with gains of 0.31 and 0.41, respectively.

  19. Stability of a rigid rotor supported on oil-film journal bearings under dynamic load

    NASA Technical Reports Server (NTRS)

    Majumdar, B. C.; Brewe, D. E.

    1987-01-01

    Most published work relating to dynamically loaded journal bearings is directed toward determining the minimum film thickness from the predicted journal trajectories. Such work gives no information about the subsynchronous whirl stability of journal bearing systems, since it does not consider the equations of motion. It is, however, necessary to know whether the bearing system operates stably under a given operating condition. The stability characteristics of the system are analyzed here. A linearized perturbation theory about the equilibrium point can predict the threshold of stability, but it does not capture post-whirl orbit detail: the linearized method may indicate that a bearing is unstable for a given operating condition, whereas a nonlinear analysis may show that it settles into a stable limit cycle. For this reason, a nonlinear transient analysis of a rigid rotor supported on oil journal bearings is performed under (1) a unidirectional constant load, (2) a unidirectional periodic load, and (3) a variable rotating load. The hydrodynamic forces are calculated by solving the time-dependent Reynolds equation with a finite difference method using a successive overrelaxation scheme. Using these forces, the equations of motion are solved by the fourth-order Runge-Kutta method to predict the transient behavior of the rotor. With the aid of a high-speed digital computer and graphics, journal trajectories are obtained for several different operating conditions.
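The Runge-Kutta time-stepping used for the equations of motion can be sketched on a toy rotor model. The bearing force here is a linearized spring-damper stand-in, not the Reynolds-equation film force the paper computes, and all parameter values are made up; the point is only the structure of the transient-orbit integration.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classic fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy rotor: journal centre (x, y) under a linearized film force
# (stiffness k, damping c) and a rotating load of magnitude f0.
m, k, c, f0, omega = 1.0, 100.0, 2.0, 5.0, 8.0

def rhs(t, s):
    x, y, vx, vy = s
    ax = (f0 * np.cos(omega * t) - k * x - c * vx) / m
    ay = (f0 * np.sin(omega * t) - k * y - c * vy) / m
    return np.array([vx, vy, ax, ay])

h, s = 1e-3, np.zeros(4)
orbit = []
for i in range(20000):                 # 20 s of transient response
    s = rk4_step(rhs, i * h, s, h)
    orbit.append(s[:2].copy())
orbit = np.array(orbit)
radius = np.hypot(orbit[-2000:, 0], orbit[-2000:, 1])  # last 2 s
```

For this stable linear toy, the transient decays and the journal settles onto a circular orbit of radius f0 / sqrt((k - m*omega^2)^2 + (c*omega)^2); in the paper's nonlinear setting the analogous long-time behavior is the stable limit cycle.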

  20. [Design of Adjustable Magnetic Field Generating Device in the Capsule Endoscope Tracking System].

    PubMed

    Ruan, Chao; Guo, Xudong; Yang, Fei

    2015-08-01

    The capsule endoscope, swallowed from the mouth into the digestive system, can capture images of important gastrointestinal tract regions, compensating for the blind spots of traditional endoscopic techniques and enabling inspection of the digestive system without discomfort or need for sedation. However, currently available clinical capsule endoscopes have limitations: because the doctor cannot control the capsule's motion and orientation, the diagnostic information cannot be matched to an orientation in the body. To solve this problem, it is important to track the position and orientation of the capsule in the human body. This study presents an AC-excitation wireless tracking method for the capsule endoscope, in which a sensor embedded in the capsule measures the magnetic field generated by an excitation coil. The position and orientation of the capsule are then obtained by solving a magnetic field inverse problem. Because the magnetic field decays dramatically with distance, the dynamic range of the received signal spans three orders of magnitude; we therefore designed an adjustable alternating magnetic field generating device that adjusts the strength of the alternating magnetic field automatically using the feedback signal from the sensor. A prototype experiment showed that the adjustable magnetic field generating device is feasible: it realized automatic adjustment of the magnetic field strength and improved the tracking accuracy.
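The magnetic-field inverse problem can be sketched as a nonlinear least-squares fit. Everything below is an illustrative assumption rather than the authors' method: the source is idealized as a point dipole with a known moment, the sensor layout is invented, and a plain Gauss-Newton solver with a numerical Jacobian recovers only the position.

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4*pi), SI units

def dipole_field(p, m, r_sensor):
    """Flux density of a point dipole with moment m at position p (T)."""
    r = r_sensor - p
    d = np.linalg.norm(r, axis=-1, keepdims=True)
    rh = r / d
    return MU0_4PI * (3 * rh * np.sum(rh * m, axis=-1, keepdims=True) - m) / d**3

def locate_dipole(sensors, readings, m, p0, iters=50):
    """Gauss-Newton fit of the dipole position to field readings."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        res = (dipole_field(p, m, sensors) - readings).ravel()
        J = np.empty((res.size, 3))   # forward-difference Jacobian
        for k in range(3):
            dp = np.zeros(3); dp[k] = 1e-6
            J[:, k] = ((dipole_field(p + dp, m, sensors) - readings).ravel() - res) / 1e-6
        p -= np.linalg.lstsq(J, res, rcond=None)[0]
    return p

# Synthetic check: 6 sensors on a ring, true source at (0.02, -0.01, 0.15) m
ang = np.linspace(0, 2 * np.pi, 6, endpoint=False)
sensors = np.column_stack([0.2 * np.cos(ang), 0.2 * np.sin(ang), np.zeros(6)])
m = np.array([0.0, 0.0, 1.0])          # A*m^2, assumed known
p_true = np.array([0.02, -0.01, 0.15])
readings = dipole_field(p_true, m, sensors)
p_est = locate_dipole(sensors, readings, m, p0=[0.0, 0.0, 0.1])
```

With noiseless synthetic readings, the fit recovers the true position; the three-orders-of-magnitude dynamic range the abstract mentions is exactly the 1/d^3 decay visible in `dipole_field`.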

  1. Professional Learning through the Collaborative Design of Problem-Solving Lessons

    ERIC Educational Resources Information Center

    Wake, Geoff; Swan, Malcolm; Foster, Colin

    2016-01-01

    This article analyses lesson study as a mode of professional learning, focused on the development of mathematical problem solving processes, using the lens of cultural-historical activity theory. In particular, we draw attention to two activity systems, the classroom system and the lesson-study system, and the importance of making artefacts…

  2. Ergonomics and workplace design: application of Ergo-UAS System in Fiat Group Automobiles.

    PubMed

    Vitello, M; Galante, L G; Capoccia, M; Caragnano, G

    2012-01-01

    Since 2008, Fiat Group Automobiles has used the Ergo-UAS system for balancing production lines and detecting ergonomic issues. The Ergo-UAS system integrates two specific methods: MTM-UAS for time measurement and EAWS for evaluating the biomechanical effort at each workstation. Fiat uses a software system to manage time evaluation and ergonomic characterization of the production cycle (UAS), perform line balancing, and obtain allowance factors in all Italian car manufacturing plants. For new car models, starting with the New Panda, FGA has applied Ergo-UAS to workplace design from the earliest phase of product development. This means that workplace design is based on information about the new product, new layout, and new work organization, and is performed by a multidisciplinary team (Work Place Integration Team) focusing on several aspects of product and process: safety, quality, and productivity. This makes it possible to find and resolve ergonomic problems before the start of production, through close cooperation among product development, engineering and design, and manufacturing. Three examples of workstation design are presented in which the application of Ergo-UAS was decisive in identifying initially excessive levels of biomechanical load and helped the process designers improve the workstations and define limits of acceptability. The technical activities (on the product or the process) and organizational changes implemented to solve the problems are presented, along with a comparison between the "before" and "after" ergonomic scores needed to bring the workstations to acceptable conditions.

  3. Stressors and Caregivers' Depression: Multiple Mediators of Self-Efficacy, Social Support, and Problem-Solving Skill.

    PubMed

    Tang, Fengyan; Jang, Heejung; Lingler, Jennifer; Tamres, Lisa K; Erlen, Judith A

    2015-01-01

    Caring for an older adult with memory loss is stressful. Caregiver stress can produce negative outcomes such as depression. Previous research is limited in examining multiple intermediate pathways from caregiver stress to depressive symptoms. This study addresses this limitation by examining the role of self-efficacy, social support, and problem solving in mediating the relationships between caregiver stressors and depressive symptoms. Using a sample of 91 family caregivers, we simultaneously tested multiple mediators between caregiver stressors and depression. Results indicate that self-efficacy mediated the pathway from daily hassles to depression. Findings point to the importance of improving self-efficacy in psychosocial interventions for caregivers of older adults with memory loss.

  4. Subspace projection method for unstructured searches with noisy quantum oracles using a signal-based quantum emulation device

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Ostrove, Corey I.

    2017-01-01

    This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
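For context, the ideal (noise-free) baseline the paper compares against is Grover's algorithm, whose success probability after k iterations on N items with one marked solution is sin^2((2k+1)*theta) with theta = arcsin(1/sqrt(N)). A quick check of the usual optimal iteration count:

```python
import math

def grover_success(N, k):
    """Success probability after k ideal Grover iterations on N items."""
    theta = math.asin(1 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

N = 1024
# Optimal k is the integer nearest pi/(4*theta) - 1/2.
k_opt = round(math.pi / (4 * math.asin(1 / math.sqrt(N))) - 0.5)
p = grover_success(N, k_opt)
```

The paper's claim concerns how this probability degrades under a noisy oracle, where projections can outperform the single Grover run; the formula above is only the noiseless reference point.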

  5. A test of the circumvention-of-limits hypothesis in scientific problem solving: the case of geological bedrock mapping.

    PubMed

    Hambrick, David Z; Libarkin, Julie C; Petcovic, Heather L; Baker, Kathleen M; Elkins, Joe; Callahan, Caitlin N; Turner, Sheldon P; Rench, Tara A; Ladue, Nicole D

    2012-08-01

    Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco Root Mountains of Montana. A Visuospatial Ability × Geological Knowledge interaction was found, such that visuospatial ability positively predicted mapping performance at low, but not high, levels of geological knowledge. This finding suggests that high levels of domain knowledge may sometimes enable circumvention of performance limitations associated with cognitive abilities.

  6. An optical solution for the traveling salesman problem.

    PubMed

    Haist, Tobias; Osten, Wolfgang

    2007-08-06

    We introduce an optical method based on white light interferometry in order to solve the well-known NP-complete traveling salesman problem. To our knowledge, it is the first time that a method for the reduction of non-polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of photons available for solving the problem. It turns out that this number of photons is proportional to N^N for a traveling salesman problem with N cities, so that for large numbers of cities the method is in practice limited by the signal-to-noise ratio. The proposed method is meant purely as a gedankenexperiment.
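The N-to-the-power-N photon budget can be put in perspective with a back-of-the-envelope calculation; the emission rate assumed below (~1e18 photons/s, roughly a 1 W visible laser) is an illustrative figure, not from the paper.

```python
# N**N grows super-exponentially with the number of cities N, so even a
# bright source is exhausted by modest instances.
photons_per_second = 1e18   # assumed emission rate

def seconds_needed(n_cities):
    """Time to emit the N**N photons the optical encoding requires."""
    return n_cities ** n_cities / photons_per_second

for n in (5, 10, 15, 20):
    print(f"N={n:2d}: N^N = {n ** n:.3e} photons, ~{seconds_needed(n):.3e} s")
```

Already at N = 20 the required emission time is on the order of years, which is why the authors present the method as a gedankenexperiment.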

  7. Discovering Steiner Triple Systems through Problem Solving

    ERIC Educational Resources Information Center

    Sriraman, Bharath

    2004-01-01

    An attempt to implement problem solving as a teacher of ninth-grade algebra is described. The problems selected were not general ones: they involved combinations, represented various situations, and were more complex, which led to the discovery of Steiner triple systems.

  8. The Study on Network Examinational Database based on ASP Technology

    NASA Astrophysics Data System (ADS)

    Zhang, Yanfu; Han, Yuexiao; Zhou, Yanshuang

    This article introduces the structure of a general test-bank system based on .NET technology, discussing the design of its function modules and their implementation. It focuses on the key technologies of the system: using a web-based online editor control to solve the input problem, regular expressions to handle HTML code, a genetic algorithm to optimize test-paper assembly, and the automation tools of WORD to export papers. The practical, effective design and implementation techniques can serve as a reference for the development of similar systems.
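A genetic algorithm for test-paper assembly can be sketched as follows. The encoding (a 0/1 selection mask over a question bank), the single fitness criterion (distance of the point total from a target), and the operators are a minimal illustration, not the system's actual implementation, which would weigh difficulty, coverage, and other constraints.

```python
import random

def assemble_paper(bank, target_points, pop=60, gens=120, seed=1):
    """Tiny GA that picks a subset of questions whose point total is as
    close as possible to `target_points`.

    `bank` is a list of per-question point values; a chromosome is a 0/1
    selection mask; fitness penalizes distance to the target total.
    """
    rng = random.Random(seed)
    n = len(bank)

    def fitness(mask):
        return -abs(sum(p for p, b in zip(bank, mask) if b) - target_points)

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 2]           # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)             # point mutation
            child[i] ^= 1
            children.append(child)
        popn = parents + children
    best = max(popn, key=fitness)
    return best, sum(p for p, b in zip(bank, best) if b)

bank = [5, 5, 10, 10, 15, 20, 20, 25]        # question point values
mask, total = assemble_paper(bank, target_points=60)
```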

  9. Artificial intelligence in robot control systems

    NASA Astrophysics Data System (ADS)

    Korikov, A.

    2018-05-01

    This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that works intelligently and optimally. The author proposes using optimization methods for the design of intelligent robot control systems. The article formalizes the problems of robotic control system design as a class of extremum problems with constraints. Solving these problems is complicated by high dimensionality, polymodality, and a priori uncertainty. Decomposition of the extremum problems, according to the method suggested by the author, reduces them to a sequence of simpler problems that can be successfully solved by modern computing technology. Several possible approaches to solving such problems are considered in the article.

  10. The generic task toolset: High level languages for the construction of planning and problem solving systems

    NASA Technical Reports Server (NTRS)

    Chandrasekaran, B.; Josephson, J.; Herman, D.

    1987-01-01

    The current generation of languages for the construction of knowledge-based systems is criticized as being at too low a level of abstraction, and the need for higher-level languages for building problem solving systems is advanced. A notion of generic information processing tasks in knowledge-based problem solving is introduced. A toolset is described that can be used to build expert systems in a way that enhances intelligibility and productivity in knowledge acquisition and system construction. The power of these ideas is illustrated by paying special attention to a high-level language called DSPL, with a description of how it was used in the construction of a system called MPA, which assists with planning in the domain of offensive counter air missions.

  11. Small-x asymptotics of the quark helicity distribution: Analytic results

    DOE PAGES

    Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.

    2017-06-15

    In this Letter, we analytically solve the evolution equations for the small-x asymptotic behavior of the (flavor singlet) quark helicity distribution in the large-Nc limit. These evolution equations form a set of coupled integro-differential equations, which previously could only be solved numerically. That approximate numerical solution, however, revealed simplifying properties of the small-x asymptotics, which we exploit here to obtain an analytic solution.

  12. HIPPO Unit Commitment Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-01-17

    Developed for the Midcontinent Independent System Operator, Inc. (MISO), HIPPO Unit Commitment Version 1 solves the security-constrained unit commitment problem. The model was developed to solve MISO's cases. This version of the code includes an I/O module to read MISO's csv files, modules to create a state-based mixed-integer programming formulation of the problem, and modules to test basic procedures for solving the MIP via HPC.
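At its core, unit commitment is a cost-minimizing on/off scheduling problem. HIPPO formulates it as a mixed-integer program; the sketch below instead brute-forces a toy instance (made-up units and demands, no network, reserve, or ramping constraints) just to show the structure of the decision.

```python
from itertools import product

units = [  # (name, capacity MW, fixed cost $/h when on, marginal $/MWh)
    ("coal",   100, 500, 10.0),
    ("gas",     60, 200, 25.0),
    ("peaker",  40,  50, 60.0),
]
demand = [120, 150]  # MW to meet in each hour

def hour_cost(on_units, load):
    """Cheapest dispatch of committed units for one hour; None if short."""
    if sum(cap for _, cap, _, _ in on_units) < load:
        return None
    cost = sum(fixed for _, _, fixed, _ in on_units)
    for _, cap, _, marginal in sorted(on_units, key=lambda u: u[3]):
        take = min(cap, load)          # merit-order dispatch
        cost += take * marginal
        load -= take
    return cost

best = None  # (total cost, commitment plan as a flat 0/1 tuple)
n = len(units)
for plan in product([0, 1], repeat=n * len(demand)):
    total = 0.0
    for h, load in enumerate(demand):
        on = [u for i, u in enumerate(units) if plan[h * n + i]]
        c = hour_cost(on, load)
        if c is None:
            total = None
            break
        total += c
    if total is not None and (best is None or total < best[0]):
        best = (total, plan)
```

The MIP formulation replaces this 2^(units x hours) enumeration with binary commitment variables and continuous dispatch variables, which is what makes MISO-scale cases tractable.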

  13. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation

    NASA Technical Reports Server (NTRS)

    Layton, Charles; Smith, Philip J.; Mc Coy, C. Elaine

    1994-01-01

    Both optimization techniques and expert systems technologies are popular approaches for developing tools to assist in complex problem-solving tasks. Because of the underlying complexity of many such tasks, however, the models of the world implicitly or explicitly embedded in such tools are often incomplete and the problem-solving methods fallible. The result can be 'brittleness' in situations that were not anticipated by the system designers. To deal with this weakness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of the person (or a group of people) and the computer system. This study evaluates the impact of alternative design concepts on the performance of 30 airline pilots interacting with such a cooperative system designed to support en-route flight planning. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes and resultant performances of users. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  14. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation

    NASA Technical Reports Server (NTRS)

    Layton, Charles; Smith, Philip J.; McCoy, C. Elaine

    1994-01-01

    Both optimization techniques and expert systems technologies are popular approaches for developing tools to assist in complex problem-solving tasks. Because of the underlying complexity of many such tasks, however, the models of the world implicitly or explicitly embedded in such tools are often incomplete and the problem-solving methods fallible. The result can be 'brittleness' in situations that were not anticipated by the system designers. To deal with this weakness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of the person (or a group of people) and the computer system. This study evaluates the impact of alternative design concepts on the performance of 30 airline pilots interacting with such a cooperative system designed to support enroute flight planning. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes and resultant performances of users. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  15. The pandemonium system of reflective agents.

    PubMed

    Smieja, F

    1996-01-01

    The Pandemonium system of reflective MINOS agents solves problems by automatic dynamic modularization of the input space. The agents contain feedforward neural networks which adapt using the backpropagation algorithm. We demonstrate the performance of Pandemonium on various categories of problems. These include learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. It is shown how strongly the advantages gained from using a modularization technique depend on the nature of the problem. The superiority of the Pandemonium method over a single net on the first two test categories is contrasted with its limited advantages for the second two categories. In the first case the system converges more quickly with modularization and is seen to lead to simpler solutions. In the second case the problem is not significantly simplified through flat decomposition of the input space, although convergence is still faster.

  16. A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems

    PubMed Central

    Chung, Young-uk; Choi, Yong-Hoon; Park, Suwon; Lee, Hyukjoon

    2014-01-01

    Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes. PMID:25250377

  17. Performance Models for Split-execution Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  18. Comparative Risk Analysis for Metropolitan Solid Waste Management Systems

    NASA Astrophysics Data System (ADS)

    Chang, Ni-Bin; Wang, S. F.

    1996-01-01

    Conventional solid waste management planning usually focuses on economic optimization, in which the related environmental impacts or risks are rarely considered. The purpose of this paper is to illustrate the methodology of how optimization concepts and techniques can be applied to structure and solve risk management problems such that the impacts of air pollution, leachate, traffic congestion, and noise increments can be regulated in the long-term planning of metropolitan solid waste management systems. Management alternatives are sequentially evaluated by adding several environmental risk control constraints stepwise in an attempt to improve the management strategies and reduce the risk impacts in the long run. Statistics associated with those risk control mechanisms are presented as well. Siting, routing, and financial decision making in such solid waste management systems can also be achieved with respect to various resource limitations and disposal requirements.

  19. A low noise photoelectric signal acquisition system applying in nuclear magnetic resonance gyroscope

    NASA Astrophysics Data System (ADS)

    Lu, Qilin; Zhang, Xian; Zhao, Xinghua; Yang, Dan; Zhou, Binquan; Hu, Zhaohui

    2017-10-01

    The nuclear magnetic resonance (NMR) gyroscope serves as strong support for a new generation of high-tech weapon systems, as it addresses the core problem limiting the development of long-duration seamless navigation and positioning. In the NMR gyroscope, the output signal at the atomic precession frequency is detected through the probe light, so the quality of the photoelectric signal derived from the probe light directly determines the quality of the gyro signal. Detecting this output signal places high demands on the sensitivity, resolution, and measurement accuracy of the photoelectric detection system. To detect the measured signal more reliably, this paper proposes a rapid acquisition system for weak photoelectric signals that achieves a high SNR and a signal response bandwidth of up to 100 kHz, so that the weak, high-frequency output signal of the NMR gyroscope can be detected more accurately.

  20. Photon scattering from a system of multilevel quantum emitters. I. Formalism

    NASA Astrophysics Data System (ADS)

    Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.

    2018-04-01

    We introduce a formalism to solve the problem of photon scattering from a system of multilevel quantum emitters. Our approach provides a direct solution of the scattering dynamics. As such the formalism gives the scattered fields' amplitudes in the limit of a weak incident intensity. Our formalism is equipped to treat both multiemitter and multilevel emitter systems, and is applicable to a plethora of photon-scattering problems, including conditional state preparation by photodetection. In this paper, we develop the general formalism for an arbitrary geometry. In the following paper (part II) S. Das et al. [Phys. Rev. A 97, 043838 (2018), 10.1103/PhysRevA.97.043838], we reduce the general photon-scattering formalism to a form that is applicable to one-dimensional waveguides and show its applicability by considering explicit examples with various emitter configurations.

  1. The method of space-time and conservation element and solution element: A new approach for solving the Navier-Stokes and Euler equations

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1995-01-01

    A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.
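    For context on what a two-level explicit marching scheme for the convection-diffusion equation looks like, here is the standard FTCS discretization of u_t + a u_x = mu u_xx. This is a conventional baseline only, not the paper's a-mu CE/SE scheme:

```python
import numpy as np

# Standard explicit two-level (FTCS) scheme for u_t + a u_x = mu u_xx
# on a periodic domain; NOT the CE/SE a-mu scheme of the paper.
a, mu = 1.0, 0.01
nx = 200
dx = 1.0 / nx
dt = 0.4 * dx * dx / mu            # within the diffusion stability limit
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)  # initial Gaussian pulse
for _ in range(200):
    up = np.roll(u, -1)            # periodic right neighbor
    um = np.roll(u, 1)             # periodic left neighbor
    u = u - a * dt / (2 * dx) * (up - um) + mu * dt / dx**2 * (up - 2 * u + um)
print(u.max())                     # pulse decays as it convects
```

    Both time levels appear explicitly in the update, which is the sense in which such schemes are "two-level"; the CE/SE method instead marches conservation-element fluxes with two independent variables per node.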

  2. Reduction of the two dimensional stationary Navier-Stokes problem to a sequence of Fredholm integral equations of the second kind

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.

    1981-01-01

    Present approaches to solving the stationary Navier-Stokes equations are of limited value; however, there does exist an equivalent representation of the problem that has significant potential in solving such problems. This is due to the fact that the equivalent representation consists of a sequence of Fredholm integral equations of the second kind, and the solving of this type of problem is very well developed. For the problem in this form, there is an excellent chance to also determine explicit error estimates, since bounded, rather than unbounded, linear operators are dealt with.
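    The well-developed machinery for Fredholm integral equations of the second kind can be illustrated with a standard Nyström discretization. The kernel below is a toy chosen for its closed-form solution, not one arising from the Navier-Stokes reduction:

```python
import numpy as np

# Nystrom method for a Fredholm equation of the second kind:
#   phi(x) - \int_0^1 K(x,t) phi(t) dt = f(x)
# with toy kernel K(x,t) = 0.5*x*t and f(x) = x (exact solution: 1.2*x).
n = 50
# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
t, w = np.polynomial.legendre.leggauss(n)
t = 0.5 * (t + 1.0)
w = 0.5 * w
K = 0.5 * np.outer(t, t)          # kernel at node pairs
A = np.eye(n) - K * w             # (I - K diag(w)) phi = f
phi = np.linalg.solve(A, t)
print(np.max(np.abs(phi - 1.2 * t)))   # error near machine precision
```

    Because the operator I - K is bounded with a bounded inverse here, the discrete system inherits the well-conditioning that the abstract credits to this formulation.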

  3. Numerical algebraic geometry: a new perspective on gauge and string theories

    NASA Astrophysics Data System (ADS)

    Mehta, Dhagash; He, Yang-Hui; Hauensteine, Jonathan D.

    2012-07-01

    There is a rich interplay between algebraic geometry and string and gauge theories which has been recently aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called `embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.

  4. New Galerkin operational matrices for solving Lane-Emden type equations

    NASA Astrophysics Data System (ADS)

    Abd-Elhameed, W. M.; Doha, E. H.; Saad, A. S.; Bassuony, M. A.

    2016-04-01

    Lane-Emden type equations model many phenomena in mathematical physics and astrophysics, such as thermal explosions. This paper is concerned with introducing third and fourth kind Chebyshev-Galerkin operational matrices in order to solve such problems. The principal idea behind the suggested algorithms is based on converting the linear or nonlinear Lane-Emden problem, through the application of suitable spectral methods, into a system of linear or nonlinear equations in the expansion coefficients, which can be efficiently solved. The main advantage of the proposed algorithm in the linear case is that the resulting linear systems are specially structured, and this of course reduces the computational effort required to solve such systems. As an application, we consider the solar model polytrope with n=3 to show that the suggested solutions in this paper are in good agreement with the numerical results.
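    As a point of reference for such solvers, the n = 3 polytrope mentioned above can be integrated directly with a standard ODE solver. This sketch is only a conventional numerical treatment of the same equation, not the Chebyshev-Galerkin algorithm of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden equation theta'' + (2/xi) theta' + theta^n = 0, n = 3
# (solar polytrope), theta(0) = 1, theta'(0) = 0.
n = 3.0

def rhs(xi, y):
    theta, dtheta = y
    # sign/abs keeps theta^n real if theta dips below zero
    return [dtheta, -2.0 / xi * dtheta - np.sign(theta) * abs(theta) ** n]

# Start slightly off xi = 0 using the series theta ~ 1 - xi^2/6
eps = 1e-6
sol = solve_ivp(rhs, [eps, 10.0], [1.0 - eps**2 / 6, -eps / 3],
                dense_output=True, rtol=1e-10, atol=1e-12)
xi = np.linspace(eps, 8.0, 100000)
theta = sol.sol(xi)[0]
xi1 = xi[np.argmax(theta < 0.0)]   # first zero: dimensionless radius
print(xi1)                         # ~6.897 for n = 3
```

    The first zero xi_1 of theta gives the dimensionless stellar radius and is the standard check for n = 3 polytrope solvers.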

  5. A Cognitive Simulator for Learning the Nature of Human Problem Solving

    NASA Astrophysics Data System (ADS)

    Miwa, Kazuhisa

    Problem solving is understood as a process through which states of problem solving are transferred from the initial state to the goal state by applying adequate operators. Within this framework, knowledge and strategies are given as operators for the search. One of the most important points of researchers' interest in the domain of problem solving is to explain the performance of problem solving behavior based on the knowledge and strategies that the problem solver has. We call the interplay between problem solvers' knowledge/strategies and their behavior the causal relation between mental operations and behavior. It is crucially important, we believe, for novice learners in this domain to understand this causal relation. Based on this insight, we have constructed a learning system in which learners can control the mental operations, such as knowledge, heuristics, and cognitive capacity, of a computational agent that solves a task, and can observe its behavior. We also introduce this system in a university class and discuss the findings made by the participants.

  6. Chemical Equation Balancing.

    ERIC Educational Resources Information Center

    Blakley, G. R.

    1982-01-01

    Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
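    The matrix method described amounts to finding a null-space vector of the element-composition matrix. A NumPy sketch (rather than the author's FORTRAN) for C3H8 + O2 -> CO2 + H2O:

```python
import numpy as np

# Balancing C3H8 + O2 -> CO2 + H2O as a homogeneous system A c = 0:
# rows are elements (C, H, O); product columns carry a minus sign.
A = np.array([[3, 0, -1,  0],   # carbon
              [8, 0,  0, -2],   # hydrogen
              [0, 2, -2, -1]],  # oxygen
             dtype=float)
# The null space of A is spanned by the last right-singular vector.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]
c = c / c[0]                    # normalize so the first coefficient is 1
print(np.round(c, 6))           # -> [1, 5, 3, 4]
```

    The result reads off as 1 C3H8 + 5 O2 -> 3 CO2 + 4 H2O; scaling the null vector to the smallest integers recovers the balanced equation.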

  7. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the THNEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  8. Theory of the Origin, Evolution, and Nature of Life

    PubMed Central

    Andrulis, Erik D.

    2011-01-01

    Life is an inordinately complex unsolved puzzle. Despite significant theoretical progress, experimental anomalies, paradoxes, and enigmas have revealed paradigmatic limitations. Thus, the advancement of scientific understanding requires new models that resolve fundamental problems. Here, I present a theoretical framework that economically fits evidence accumulated from examinations of life. This theory is based upon a straightforward and non-mathematical core model and proposes unique yet empirically consistent explanations for major phenomena including, but not limited to, quantum gravity, phase transitions of water, why living systems are predominantly CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), homochirality of sugars and amino acids, homeoviscous adaptation, triplet code, and DNA mutations. The theoretical framework unifies the macrocosmic and microcosmic realms, validates predicted laws of nature, and solves the puzzle of the origin and evolution of cellular life in the universe. PMID:25382118

  9. Progressive upper limb prosthetics.

    PubMed

    Lake, Chris; Dodson, Robert

    2006-02-01

    The field of upper extremity prosthetics is a constantly changing arena as researchers and prosthetists strive to bridge the gap between prosthetic reality and upper limb physiology. With the further development of implantable neurologic sensing devices and targeted muscle innervation (discussed elsewhere in this issue), the challenge of limited input to control vast outputs promises to become a historical footnote in the future annals of upper limb prosthetics. Soon multidextrous terminal devices, such as that found in the iLimb system (Touch EMAS, Inc., Edinburgh, UK), will be a clinical reality (Fig. 22). Successful prosthetic care depends on good communication and cooperation among the surgeon, the amputee, the rehabilitation team, and the scientists harnessing the power of technology to solve real-life challenges. If the progress to date is any indication, amputees of the future will find their dreams limited only by their imagination.

  10. Spin waves in rings of classical magnetic dipoles

    NASA Astrophysics Data System (ADS)

    Schmidt, Heinz-Jürgen; Schröder, Christian; Luban, Marshall

    2017-03-01

    We theoretically and numerically investigate spin waves that occur in systems of classical magnetic dipoles that are arranged at the vertices of a regular polygon and interact solely via their magnetic fields. There are certain limiting cases that can be analyzed in detail. One case is that of spin waves as infinitesimal excitations from the system's ground state, where the dispersion relation can be determined analytically. The frequencies of these infinitesimal spin waves are compared with the peaks of the Fourier transform of the thermal expectation value of the autocorrelation function calculated by Monte Carlo simulations. In the special case of vanishing wave number an exact solution of the equations of motion is possible describing synchronized oscillations with finite amplitudes. Finally, the limiting case of a dipole chain with N → ∞ is investigated and completely solved.

  11. Congestion transition in air traffic networks.

    PubMed

    Monechi, Bernardo; Servedio, Vito D P; Loreto, Vittorio

    2015-01-01

    Air Transportation represents a very interesting example of a complex techno-social system whose importance has considerably grown in time and whose management requires a careful understanding of the subtle interplay between technological infrastructure and human behavior. Despite the competition with other transportation systems, a growth of air traffic is still foreseen in Europe for the next years. The increase of traffic load could bring the current Air Traffic Network above its capacity limits so that safety standards and performances might not be guaranteed anymore. Lacking the possibility of a direct investigation of this scenario, we resort to computer simulations in order to quantify the disruptive potential of an increase in traffic load. To this end we model the Air Transportation system as a complex dynamical network of flights controlled by humans who have to solve potentially dangerous conflicts by redirecting aircraft trajectories. The model is driven and validated through historical data of flight schedules in a European national airspace. While correctly reproducing actual statistics of the Air Transportation system, e.g., the distribution of delays, the model allows for theoretical predictions. Upon an increase of the traffic load injected in the system, the model predicts a transition from a phase in which all conflicts can be successfully resolved, to a phase in which many conflicts cannot be resolved anymore. We highlight how the current flight density of the Air Transportation system is well below the transition, provided that controllers make use of a special re-routing procedure. While the congestion transition displays a universal scaling behavior, its threshold depends on the conflict solving strategy adopted. Finally, the generality of the modeling scheme introduced makes it a flexible general tool to simulate and control Air Transportation systems in realistic and synthetic scenarios.

  12. An adaptive confidence limit for periodic non-steady conditions fault detection

    NASA Astrophysics Data System (ADS)

    Wang, Tianzhen; Wu, Hao; Ni, Mengqi; Zhang, Milu; Dong, Jingjing; Benbouzid, Mohamed El Hachemi; Hu, Xiong

    2016-05-01

    System monitoring has become a major concern in batch process due to the fact that failure rate in non-steady conditions is much higher than in steady ones. A series of approaches based on PCA have already solved problems such as data dimensionality reduction, multivariable decorrelation, and processing non-changing signal. However, if the data follows non-Gaussian distribution or the variables contain some signal changes, the above approaches are not applicable. To deal with these concerns and to enhance performance in multiperiod data processing, this paper proposes a fault detection method using adaptive confidence limit (ACL) in periodic non-steady conditions. The proposed ACL method achieves four main enhancements: Longitudinal-Standardization could convert non-Gaussian sampling data to Gaussian ones; the multiperiod PCA algorithm could reduce dimensionality, remove correlation, and improve the monitoring accuracy; the adaptive confidence limit could detect faults under non-steady conditions; the fault sections determination procedure could select the appropriate parameter of the adaptive confidence limit. The achieved result analysis clearly shows that the proposed ACL method is superior to other fault detection approaches under periodic non-steady conditions.
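    A minimal non-adaptive baseline helps locate what the ACL method improves. Below, PCA is fit on synthetic "normal" data and a fixed Hotelling T^2 limit is taken from an empirical percentile; the paper's contribution is adapting this limit across the batch period, which is not reproduced here:

```python
import numpy as np

# PCA-based monitoring sketch with a FIXED confidence limit (the paper's
# ACL method adapts the limit over periodic non-steady conditions).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # synthetic normal operating data
mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd                      # standardize
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                  # retained principal components
P = Vt[:k].T                           # loading matrix (orthonormal columns)
lam = (S[:k] ** 2) / (len(X) - 1)      # component variances

def t2(sample):
    score = (sample - mu) / sd @ P
    return float(np.sum(score**2 / lam))

limit = np.percentile([t2(row) for row in X], 99)  # 99% empirical limit
# A sample pushed 8 standard deviations along the first principal direction:
fault = mu + 8.0 * sd * P[:, 0]
print(t2(fault) > limit)               # -> True (flagged as a fault)
```

    Under non-steady conditions the fixed `limit` above triggers false alarms, which is precisely the gap the adaptive confidence limit addresses.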

  13. A Novel Complex-Coefficient In-Band Interference Suppression Algorithm for Cognitive Ultra-Wide Band Wireless Sensors Networks.

    PubMed

    Xiong, Hailiang; Zhang, Wensheng; Xu, Hongji; Du, Zhengfeng; Tang, Huaibin; Li, Jing

    2017-05-25

    With the rapid development of wireless communication systems and electronic techniques, the limited frequency spectrum resources are shared with various wireless devices, leading to a crowded and challenging coexistence circumstance. Cognitive radio (CR) and ultra-wide band (UWB), as sophisticated wireless techniques, have been considered as significant solutions to solve the harmonious coexistence issues. UWB wireless sensors can share the spectrum with primary user (PU) systems without harmful interference. The in-band interference of UWB systems should be considered because such interference can severely affect the transmissions of UWB wireless systems. In order to solve the in-band interference issues for UWB wireless sensor networks (WSN), a novel in-band narrow band interferences (NBIs) elimination scheme is proposed in this paper. The proposed narrow band interferences suppression scheme is based on a novel complex-coefficient adaptive notch filter unit with a single constrained zero-pole pair. Moreover, in order to reduce the computation complexity of the proposed scheme, an adaptive complex-coefficient iterative method based on two-order Taylor series is designed. To cope with multiple narrow band interferences, a linear cascaded high order adaptive filter and a cyclic cascaded high order matrix adaptive filter (CCHOMAF) interference suppression algorithm based on the basic adaptive notch filter unit are also presented. The theoretical analysis and numerical simulation results indicate that the proposed CCHOMAF algorithm can achieve better performance in terms of average bit error rate for UWB WSNs. The proposed in-band NBIs elimination scheme can significantly improve the reception performance of low-cost and low-power UWB wireless systems.
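    The basic building block the paper cascades, a complex-coefficient notch with a single constrained zero-pole pair, can be sketched with a fixed notch frequency. The online adaptation of the frequency via the two-order Taylor-series iteration is the paper's contribution and is not reproduced here:

```python
import numpy as np

# Single constrained zero-pole-pair complex-coefficient notch filter:
#   H(z) = (1 - e^{j w0} z^-1) / (1 - rho e^{j w0} z^-1)
# with a FIXED (non-adaptive) notch frequency w0.
w0 = 2 * np.pi * 0.1          # NBI frequency, radians/sample (assumed known)
rho = 0.95                    # pole contraction factor (sets notch bandwidth)
N = 4000
n = np.arange(N)
rng = np.random.default_rng(1)
# Wideband signal plus a strong complex narrow-band interferer:
x = rng.normal(size=N) + 5.0 * np.exp(1j * w0 * n)

y = np.zeros(N, dtype=complex)
s = 0.0 + 0.0j                # direct-form II state v[i-1]
for i, xi in enumerate(x):
    v = xi + rho * np.exp(1j * w0) * s   # pole (feedback) section
    y[i] = v - np.exp(1j * w0) * s       # zero (feedforward) section
    s = v
pow_in = np.mean(np.abs(x) ** 2)
pow_out = np.mean(np.abs(y) ** 2)
print(pow_in, pow_out)        # interference power largely removed
```

    The zero on the unit circle nulls the tone exactly while the nearby pole keeps the notch narrow, so the wideband (UWB) component passes almost unchanged; cascading such units, as the paper does, handles multiple NBIs.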

  14. A Novel Complex-Coefficient In-Band Interference Suppression Algorithm for Cognitive Ultra-Wide Band Wireless Sensors Networks

    PubMed Central

    Xiong, Hailiang; Zhang, Wensheng; Xu, Hongji; Du, Zhengfeng; Tang, Huaibin; Li, Jing

    2017-01-01

    With the rapid development of wireless communication systems and electronic techniques, the limited frequency spectrum resources are shared with various wireless devices, leading to a crowded and challenging coexistence circumstance. Cognitive radio (CR) and ultra-wide band (UWB), as sophisticated wireless techniques, have been considered as significant solutions to solve the harmonious coexistence issues. UWB wireless sensors can share the spectrum with primary user (PU) systems without harmful interference. The in-band interference of UWB systems should be considered because such interference can severely affect the transmissions of UWB wireless systems. In order to solve the in-band interference issues for UWB wireless sensor networks (WSN), a novel in-band narrow band interferences (NBIs) elimination scheme is proposed in this paper. The proposed narrow band interferences suppression scheme is based on a novel complex-coefficient adaptive notch filter unit with a single constrained zero-pole pair. Moreover, in order to reduce the computation complexity of the proposed scheme, an adaptive complex-coefficient iterative method based on two-order Taylor series is designed. To cope with multiple narrow band interferences, a linear cascaded high order adaptive filter and a cyclic cascaded high order matrix adaptive filter (CCHOMAF) interference suppression algorithm based on the basic adaptive notch filter unit are also presented. The theoretical analysis and numerical simulation results indicate that the proposed CCHOMAF algorithm can achieve better performance in terms of average bit error rate for UWB WSNs. The proposed in-band NBIs elimination scheme can significantly improve the reception performance of low-cost and low-power UWB wireless systems. PMID:28587085

  15. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  16. Design of Linear Quadratic Regulators and Kalman Filters

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Geyser, L.

    1986-01-01

    AESOP solves problems associated with the design of controls and state estimators for linear time-invariant systems. The systems considered are modeled in state-variable form by a set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are the linear quadratic regulator (LQR) design problem and the steady-state Kalman filter design problem. AESOP is interactive: the user solves design problems and analyzes the solutions in a single interactive session, with both numerical and graphical information available throughout.
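    The LQR half of what AESOP solves reduces to an algebraic Riccati equation. A SciPy sketch (not the AESOP program itself) for a double-integrator plant:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR design for the double integrator xdot = A x + B u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # control weighting
P = solve_continuous_are(A, B, Q, R)     # Riccati solution
K = np.linalg.solve(R, B.T @ P)          # optimal gain, u = -K x
print(K)                                 # -> [[1.0, 1.7320508...]]
print(np.linalg.eigvals(A - B @ K))      # closed-loop poles, all stable
```

    For this plant and weights the gain is K = [1, sqrt(3)], and the closed-loop poles sit in the left half plane, which is the basic correctness check for any LQR solver.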

  17. Engineering Antifragile Systems: A Change In Design Philosophy

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.

    2014-01-01

    While technology has made astounding advances in the last century, problems are confronting the engineering community that must be solved. The cost and schedule of producing large systems are increasing at an unsustainable rate, and these systems often do not perform as intended. New systems are required that may not be achieved by current methods. To solve these problems, NASA is working to infuse concepts from Complexity Science into the engineering process. Some of these problems may be solved by a change in design philosophy. Instead of designing systems to meet known requirements, which will always lead to systems that are fragile to some degree, systems should wherever possible be designed to be antifragile: cognitive cyberphysical systems that can learn from their experience, adapt to unforeseen events they face in their environment, and grow stronger in the face of adversity. Several examples are presented of ongoing research efforts to employ this philosophy.

  18. Student Learning of Complex Earth Systems: A Model to Guide Development of Student Expertise in Problem-Solving

    ERIC Educational Resources Information Center

    Holder, Lauren N.; Scherer, Hannah H.; Herbert, Bruce E.

    2017-01-01

    Engaging students in problem-solving concerning environmental issues in near-surface complex Earth systems involves developing student conceptualization of the Earth as a system and applying that scientific knowledge to the problems using practices that model those used by professionals. In this article, we review geoscience education research…

  19. Comparative Analysis.

    DTIC Science & Technology

    1987-11-01

    differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based ... comparative analysis as an important component; the explanation is used in many different ways. One method of automated design is the principled ...

  20. MENDEL: An Intelligent Computer Tutoring System for Genetics Problem-Solving, Conjecturing, and Understanding.

    ERIC Educational Resources Information Center

    Streibel, Michael; And Others

    1987-01-01

    Describes an advice-giving computer system being developed for genetics education called MENDEL that is based on research in learning, genetics problem solving, and expert systems. The value of MENDEL as a design tool and the tutorial function are stressed. Hypothesis testing, graphics, and experiential learning are also discussed. (Author/LRW)

  1. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    NASA Astrophysics Data System (ADS)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation which describes the first-mode damping ratio of a clamp-free cantilever beam under harmonic base excitation by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was determined to be correct for cases where the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and the analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.
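    The "simple iterative convergence method" can be illustrated as a fixed-point iteration: the damping ratio appears on both sides of the derived equation, zeta = g(zeta). The function g below is a hypothetical contraction standing in for the paper's damping-stress relation, which is not given in the abstract:

```python
# Fixed-point iteration zeta_{k+1} = g(zeta_k). The specific g here is a
# HYPOTHETICAL stand-in, not the paper's derived damping-stress equation.
def g(zeta):
    return 0.01 / (1.0 + 5.0 * zeta)   # illustrative contraction only

zeta = 0.05                            # initial guess
for _ in range(100):
    new = g(zeta)
    if abs(new - zeta) < 1e-12:        # converged
        break
    zeta = new
print(zeta)                            # converged damping ratio (toy value)
```

    Convergence is guaranteed whenever |g'| < 1 near the fixed point, which is the practical condition such an iterative scheme relies on.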

  2. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
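    The described Newton-Raphson loop with an SVD-based pseudo-inverse of the analytic Jacobian can be sketched on a toy overdetermined fit; the camera model and its parameters are omitted, and the exponential residual below is purely illustrative:

```python
import numpy as np

# Gauss-Newton iteration for an overdetermined nonlinear system r(p) = 0,
# solved in the least-squares sense via the SVD pseudo-inverse of the
# analytic Jacobian. Toy problem: fit (a, b) in y = a*exp(b*x) to data
# generated with a = 2, b = 0.5 (zero-residual case).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * x)

def residual(p):
    a, b = p
    return a * np.exp(b * x) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])   # analytic Jacobian

p = np.array([1.5, 0.3])                     # initial guess (cf. the DLT guess)
for _ in range(50):
    J, r = jacobian(p), residual(p)
    step = np.linalg.pinv(J) @ r             # SVD-based pseudo-inverse
    p = p - step
    if np.linalg.norm(step) < 1e-12:
        break
print(p)                                     # -> close to [2.0, 0.5]
```

    In the paper, the direct linear transform supplies the initial guess, playing the role of the hand-picked starting point above.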

  3. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms

    PubMed Central

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2016-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. 
These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950

  4. Learning Impasses in Problem Solving

    NASA Technical Reports Server (NTRS)

    Hodgson, J. P. E.

    1992-01-01

    Problem Solving systems customarily use backtracking to deal with obstacles that they encounter in the course of trying to solve a problem. This paper outlines an approach in which the possible obstacles are investigated prior to the search for a solution. This provides a solution strategy that avoids backtracking.

  5. SEMINAR PUBLICATION: NATIONAL CONFERENCE ON ENVIRONMENTAL PROBLEM-SOLVING WITH GEOGRAPHIC INFORMATION SYSTEMS

    EPA Science Inventory

    The National Conference on Environmental Problem Solving with Geographic Information Systems was held in Cincinnati, Ohio, September 21-23, 1994. The conference was a forum for over 450 environmental professionals to exchange information and approaches on how to use geographic ...

  6. An Expert System Shell to Teach Problem Solving.

    ERIC Educational Resources Information Center

    Lippert, Renate C.

    1988-01-01

    Discusses the use of expert systems to teach problem-solving skills to students from grade 6 to college level. The role of computer technology in the future of education is considered, and the construction of knowledge bases is described, including an example for physics. (LRW)

  7. The application of hybrid artificial intelligence systems for forecasting

    NASA Astrophysics Data System (ADS)

    Lees, Brian; Corchado, Juan

    1999-03-01

    The results to date are presented from an ongoing investigation, in which the aim is to combine the strengths of different artificial intelligence methods into a single problem solving system. The premise underlying this research is that a system which embodies several cooperating problem solving methods will be capable of achieving better performance than if only a single method were employed. The work has so far concentrated on the combination of case-based reasoning and artificial neural networks. The relative merits of artificial neural networks and case-based reasoning problem solving paradigms, and their combination are discussed. The integration of these two AI problem solving methods in a hybrid systems architecture, such that the neural network provides support for learning from past experience in the case-based reasoning cycle, is then presented. The approach has been applied to the task of forecasting the variation of physical parameters of the ocean. Results obtained so far from tests carried out in the dynamic oceanic environment are presented.

  8. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

    The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide a much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear equation system can be solved with iterative or direct solvers. However, the runtime of direct methods, or that of iterative solvers without a suitable preconditioner, increases tremendously, which is why a more sophisticated method is needed for linear systems with so many unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step it solves the linear systems with the matrices obtained from the splitting successively, reducing both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations in terms of both accuracy and runtime.
Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
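The abstract gives no implementation details, so the following is only a minimal sketch of the multiplicative Schwarz idea on a small symmetric positive definite system. A 20-unknown 1D Laplacian stands in for the 14641-unknown normal matrix, and two overlapping index blocks stand in for the subdomains; all names here are illustrative, not from the paper.

```python
import numpy as np

def multiplicative_schwarz(N, b, blocks, n_sweeps=100):
    """Sweep over overlapping index blocks; each step solves the local
    subsystem exactly against the current global residual."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_sweeps):
        for idx in blocks:
            r = b - N @ x                       # global residual
            # local correction on this overlapping block
            x[idx] += np.linalg.solve(N[np.ix_(idx, idx)], r[idx])
    return x

# small SPD stand-in for the normal matrix: 1D Laplacian
n = 20
N = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# two overlapping "subdomains" (overlap of 4 unknowns)
blocks = [np.arange(0, 12), np.arange(8, 20)]
x = multiplicative_schwarz(N, b, blocks)
```

Because each block solve is exact and the blocks overlap, the iteration contracts the error geometrically, so memory is only needed for the small diagonal blocks rather than a full factorization.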

  9. Translation among Symbolic Representations in Problem-Solving. Report on Studies Project: Alternative Strategies for Measuring Higher Order Skills: The Role of Symbol Systems.

    ERIC Educational Resources Information Center

    Shavelson, Richard J.; And Others

    Some aspects of the relationships among the symbolic representations (Rs) of problems given to students to solve, the Rs that students use to solve problems, and the accuracy of the solutions were studied. Focus was on determining: the mental Rs that students used while solving problems, the kinds of translations that take place, the accuracy of…

  10. Photolithography diagnostic expert systems: a systematic approach to problem solving in a wafer fabrication facility

    NASA Astrophysics Data System (ADS)

    Weatherwax Scott, Caroline; Tsareff, Christopher R.

    1990-06-01

    One of the main goals of process engineering in the semiconductor industry is to improve wafer fabrication productivity and throughput. Engineers must work continuously toward this goal in addition to performing sustaining and development tasks. To accomplish these objectives, managers must make efficient use of engineering resources. One of the tools being used to improve efficiency is the diagnostic expert system. Expert systems are knowledge-based computer programs designed to lead the user through the analysis and solution of a problem. Several photolithography diagnostic expert systems have been implemented at the Hughes Technology Center to provide a systematic approach to process problem solving. This systematic approach was achieved by documenting cause-and-effect analyses for a wide variety of processing problems. This knowledge was organized in the form of IF-THEN rules, a common structure for knowledge representation in expert system technology. These rules form the knowledge base of the expert system, which is stored in the computer. The systems also include the problem-solving methodology used by the expert when addressing a problem in their area of expertise. Operators now use the expert systems to solve many process problems without engineering assistance. The systems also facilitate the collection of appropriate data to assist engineering in solving unanticipated problems. Currently, several expert systems have been implemented to cover all aspects of the photolithography process. The systems, which have been in use for over a year, include wafer surface preparation (HMDS), photoresist coat and softbake, align and expose on a wafer stepper, and develop inspection. These systems are part of a plan to implement an expert system diagnostic environment throughout the wafer fabrication facility. In this paper, the systems' construction is described, including knowledge acquisition, rule construction, knowledge refinement, testing, and evaluation.
The roles played by the process engineering expert and the knowledge engineer are discussed. The features of the systems are shown, particularly the interactive quality of the consultations and the ease of system use.
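The Hughes rule base itself is not public, so the following is only a minimal forward-chaining sketch of the IF-THEN diagnostic idea described above; the facts, rule names, and conclusions are invented for illustration.

```python
# Each rule: (set of required facts, conclusion added when they all hold).
# These rules are hypothetical examples, not the actual Hughes rule base.
rules = [
    ({"resist_lifting", "hmds_expired"}, "replace_hmds"),
    ({"pattern_blur", "focus_drift"},    "recalibrate_stepper"),
    ({"replace_hmds"},                   "requeue_wafers"),
]

def diagnose(observed):
    """Fire every rule whose IF-part is satisfied, until a fixed point."""
    facts = set(observed)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = diagnose({"resist_lifting", "hmds_expired"})
```

Chaining rules this way lets an operator report symptoms and receive the derived corrective actions without engineering assistance, which is the workflow the abstract describes.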

  11. An efficient chaotic maps-based authentication and key agreement scheme using smartcards for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu

    2013-12-01

    A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation in both schemes is their violation of the contributory property of key agreements. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared to conventional schemes, the proposed scheme has fewer weaknesses, provides stronger security, and is more efficient.

  12. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
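The actual optimal power flow formulation (semidefinite relaxation, inverter constraints) is far richer than can be shown here; as a hedged sketch of only the alternating-direction skeleton, consensus ADMM is used to split a least-squares problem among hypothetical "agents", each solving its own small subproblem with limited information exchange.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 5, 1.0
As = [rng.standard_normal((8, n)) for _ in range(3)]   # per-agent data
bs = [rng.standard_normal(8) for _ in range(3)]

z = np.zeros(n)                        # shared (consensus) variable
us = [np.zeros(n) for _ in As]         # scaled dual variables

for _ in range(600):
    # each agent solves its own small regularized problem locally
    xs = [np.linalg.solve(A.T @ A + rho * np.eye(n),
                          A.T @ b + rho * (z - u))
          for A, b, u in zip(As, bs, us)]
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)   # averaging step
    us = [u + x - z for u, x in zip(us, xs)]               # dual update

# the consensus variable converges to the centralized solution
A_all, b_all = np.vstack(As), np.concatenate(bs)
```

Only the local estimates and the shared average are exchanged, which is the sense in which the computational burden is distributed among devices.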

  13. Accelerating large scale Kohn-Sham density functional theory calculations with semi-local functionals and hybrid functionals

    NASA Astrophysics Data System (ADS)

    Lin, Lin

    The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000-100,000 processors on high performance machines. The PEXSI method has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure of hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptive compressed exchange method for accelerating hybrid functional calculations. DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, Sloan Fellowship.

  14. A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    NASA Technical Reports Server (NTRS)

    Chou, Jack C. K.

    1989-01-01

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived as a mixed system of differential and algebraic equations (DAEs). The DAEs are kept in implicit form to save arithmetic and preserve the sparsity of the system, and are solved by a robust implicit integration method. At each solution point, the predicted solution is corrected to its exact solution within a given tolerance using Newton's iterative method. For each iteration, a linear system of the form J delta X = E has to be solved. The computational cost of solving this linear system directly by LU factorization is O(n^3), and it can be reduced significantly by exploiting the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system, the multiplicative and additive computational costs for solving J delta X = E are O(n) and O(n^2), respectively. The formulation and solution method for an n-body pendulum is presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.
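The predictor-corrector step described above can be sketched in miniature: backward-Euler integration of a nonlinear system, where each step solves J delta X = E by Newton iteration. A single planar pendulum stands in for the n-body chain, and the DAE structure and sparsity exploitation are omitted; this is an illustrative sketch, not the paper's formulation.

```python
import numpy as np

g, L, h = 9.81, 1.0, 0.01              # gravity, length, step size

def f(y):                              # y = [theta, omega]
    return np.array([y[1], -(g / L) * np.sin(y[0])])

def jac_f(y):
    return np.array([[0.0, 1.0],
                     [-(g / L) * np.cos(y[0]), 0.0]])

def backward_euler_step(y0):
    y = y0.copy()                      # predictor: previous value
    for _ in range(20):                # Newton corrector
        E = y - y0 - h * f(y)          # residual of the implicit equation
        J = np.eye(2) - h * jac_f(y)   # iteration matrix
        dx = np.linalg.solve(J, E)     # the "J delta X = E" solve
        y -= dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return y

y = np.array([0.5, 0.0])
for _ in range(100):
    y = backward_euler_step(y)
```

In the paper's n-body setting, J is large and sparse, and the point of the algorithm is to replace the dense O(n^3) solve above with O(n)/O(n^2) recursions over the chain.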

  15. Applications of artificial intelligence to digital photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kretsch, J.L.

    1988-01-01

    The aim of this research was to explore the application of expert systems to digital photogrammetry, specifically to photogrammetric triangulation, feature extraction, and photogrammetric problem solving. In 1987, prototype expert systems were developed for doing system startup, interior orientation, and relative orientation in the mensuration stage. The system explored means of performing diagnostics during the process. In the area of feature extraction, the relationship of metric uncertainty to symbolic uncertainty was the topic of research. Error propagation through the Dempster-Shafer formalism for representing evidence was performed in order to find the variance in the calculated belief values due to errors in measurements, together with the initial evidence needed to begin labeling observed image features with features in an object model. In photogrammetric problem solving, an expert system is under continuous development which seeks to solve photogrammetric problems using mathematical reasoning. The key to the approach used is the representation of knowledge directly in the form of equations, rather than in the form of if-then rules. Each variable in the equations is then treated as a goal to be solved.

  16. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  17. Teaching Mathematical Problem Solving to Students with Limited English Proficiency.

    ERIC Educational Resources Information Center

    Kaplan, Rochelle G.; Patino, Rodrigo A.

    Many mainstreamed students with limited English proficiency continue to face the difficulty of learning English as a second language (ESL) while studying mathematics and other content areas framed in the language of native speakers. The difficulty these students often encounter in mathematics classes and their poor performance on subsequent…

  18. Cognitive Predictors of Everyday Problem Solving across the Lifespan.

    PubMed

    Chen, Xi; Hertzog, Christopher; Park, Denise C

    2017-01-01

    An important aspect of successful aging is maintaining the ability to solve everyday problems encountered in daily life. The limited evidence today suggests that everyday problem solving ability increases from young adulthood to middle age, but decreases in older age. The present study examined age differences in the relative contributions of fluid and crystallized abilities to solving problems on the Everyday Problems Test (EPT). We hypothesized that due to diminishing fluid resources available with advanced age, crystallized knowledge would become increasingly important in predicting everyday problem solving with greater age. Two hundred and twenty-one healthy adults from the Dallas Lifespan Brain Study, aged 24-93 years, completed a cognitive battery that included measures of fluid ability (i.e., processing speed, working memory, inductive reasoning) and crystallized ability (i.e., multiple measures of vocabulary). These measures were used to predict performance on EPT. Everyday problem solving showed an increase in performance from young to early middle age, with performance beginning to decrease at about age 50. As hypothesized, fluid ability was the primary predictor of performance on everyday problem solving for young adults, but with increasing age, crystallized ability became the dominant predictor. This study provides evidence that everyday problem solving ability differs with age, and, more importantly, that the processes underlying it differ with age as well. The findings indicate that older adults increasingly rely on knowledge to support everyday problem solving, whereas young adults rely almost exclusively on fluid intelligence. © 2017 S. Karger AG, Basel.

  19. Use of Invariant Manifolds for Transfers Between Three-Body Systems

    NASA Technical Reports Server (NTRS)

    Beckman, Mark; Howell, Kathleen

    2003-01-01

    The Lunar L1 and L2 libration points have been proposed as gateways granting inexpensive access to interplanetary space. To date, only individual solutions to the transfer between three-body systems have been found. The methodology to solve the problem for arbitrary three-body systems and entire families of orbits does not exist. This paper presents the initial approaches to solve the general problem for single and multiple impulse transfers. Two different methods of representing and storing 7-dimensional invariant manifold data are presented. Some particular solutions are presented for the transfer problem, though the emphasis is on developing methodology for solving the general problem.

  20. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@orc.ru

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the ability to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iteration.
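The EDELWEISS expansion in orthogonal polynomials on [0, 1] is not reproduced here. As a loosely related minimal sketch of a series-based solution of the linear Cauchy problem y' = A y, y(0) = y0, a truncated Taylor (matrix-exponential) series is used instead; the rotation-generator test matrix is purely illustrative.

```python
import numpy as np

def expm_series(A, n_terms=40):
    """exp(A) by truncated Taylor series (adequate for small ||A||)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ A / k            # A^k / k!
        E = E + term
    return E

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # rotation generator: y'' = -y
y0 = np.array([1.0, 0.0])
y1 = expm_series(A) @ y0               # solution at t = 1
```

Any series representation of exp(tA) — Taylor here, orthogonal polynomials in the paper — turns the Cauchy problem into repeated matrix-vector products, which is what makes MPI parallelization natural for large systems.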

  1. Representations of Invariant Manifolds for Applications in Three-Body Systems

    NASA Technical Reports Server (NTRS)

    Howell, K.; Beckman, M.; Patterson, C.; Folta, D.

    2004-01-01

    The Lunar L1 and L2 libration points have been proposed as gateways granting inexpensive access to interplanetary space. To date, only individual solutions to the transfer between three-body systems have been found. The methodology to solve the problem for arbitrary three-body systems and entire families of orbits is currently being studied. This paper presents an initial approach to solve the general problem for single and multiple impulse transfers. Two different methods of representing and storing the invariant manifold data are presented. Some particular solutions are presented for two types of transfer problems, though the emphasis is on developing the methodology for solving the general problem.

  2. On a new iterative method for solving linear systems and comparison results

    NASA Astrophysics Data System (ADS)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is then established, which is shown both theoretically and numerically to perform at least as well as, and generally better than, Ujevic's method. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
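Neither Ujevic's modification nor the authors' variant is reproduced here; as a sketch of the baseline they both build on, Gauss-Seidel can be viewed as a sequence of one-dimensional projections onto the coordinate directions:

```python
import numpy as np

def gauss_seidel(A, b, n_sweeps=100):
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_sweeps):
        for i in range(len(b)):
            # project the current residual onto coordinate direction i
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

# diagonally dominant test system, for which Gauss-Seidel converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

Viewing each update as a projection is what lets the paper generalize the search directions and compare convergence rates within one framework.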

  3. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay Derivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.
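The paper's forward-backward alternation for unstable problems is not reproduced here. As a sketch of the baseline "false transient" idea it improves upon, an artificial time derivative is added to the elliptic problem u'' = f with u(0) = u(1) = 0, and explicit forward steps are taken until a steady state is reached:

```python
import numpy as np

n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)           # interior grid points
f = np.ones(n)                          # u'' = 1  ->  u = x(x - 1)/2
u = np.zeros(n)
dt = 0.4 * h * h                        # explicit stability limit ~ h^2/2

for _ in range(20000):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
    lap[0] = u[1] - 2 * u[0]            # Dirichlet ends: ghost values are 0
    lap[-1] = u[-2] - 2 * u[-1]
    u = u + dt * (lap / (h * h) - f)    # march u_t = u_xx - f to steady state

exact = x * (x - 1) / 2
```

When the steady state is unstable, this forward-only marching diverges; alternating forward and backward integrations, as the abstract describes, is what restores convergence.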

  4. Problem Solving Under Time-Constraints.

    ERIC Educational Resources Information Center

    Richardson, Michael; Hunt, Earl

    A model of how automated and controlled processing can be mixed in computer simulations of problem solving is proposed. It is based on previous work by Hunt and Lansman (1983), who developed a model of problem solving that could reproduce the data obtained with several attention and performance paradigms, extending production-system notation to…

  5. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in realtime closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enable the user to configure and interact with simulation objects at run time.

  6. Metric versus observable operator representation, higher spin models

    NASA Astrophysics Data System (ADS)

    Fring, Andreas; Frith, Thomas

    2018-02-01

    We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately, we argue that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for Hermitian and non-Hermitian spin 1/2, 1 and 3/2 models with time-independent and time-dependent metric, respectively. In all models, the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduced to simple linear differential equations and an equation that can be converted into the nonlinear Ermakov-Pinney equation.

  7. Characterization and Developmental History of Problem Solving Methods in Medicine

    PubMed Central

    Harbort, Robert A.

    1980-01-01

    The central thesis of this paper is the importance of the framework in which information is structured. It is technically important in the design of systems; it is also important in guaranteeing that systems are usable by clinicians. Progress in medical computing depends on our ability to develop a more quantitative understanding of the role of context in our choice of problem solving techniques. This in turn will help us to design more flexible and responsive computer systems. The paper contains an overview of some models of knowledge and problem solving methods, a characterization of modern diagnostic techniques, and a discussion of skill development in medical practice. Diagnostic techniques are examined in terms of how they are taught, what problem solving methods they use, and how they fit together into an overall theory of interpretation of the medical status of a patient.

  8. Models of resource allocation optimization when solving the control problems in organizational systems

    NASA Astrophysics Data System (ADS)

    Menshikh, V.; Samorokovskiy, A.; Avsentev, O.

    2018-03-01

    A mathematical model for optimizing the allocation of resources to reduce the time needed for management decisions is presented, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, the solution of which requires solving several specific subproblems: estimating the duration of each action, depending on the number of performers within the group that performs it; estimating the total execution time of all actions, depending on the quantitative composition of the groups of performers; and finding a distribution of the available performers among groups that minimizes the total execution time of all actions. In addition, algorithms for solving the general resource allocation problem are proposed.

  9. Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coon, Ethan; Porter, Mark L.; Kang, Qinjun

    2012-06-18

    In spatially and temporally localized instances, capturing sub-reservoir-scale information is necessary; capturing it everywhere is neither necessary nor computationally possible. The lattice Boltzmann method (LBM) is used to solve pore-scale systems: at the pore scale, LBM provides an extremely scalable, efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows hybrid approaches to be extended to solve multi-scale reactive transport.

  10. Computer Systems for Teaching Complex Concepts.

    ERIC Educational Resources Information Center

    Feurzeig, Wallace

    Four programming systems--Mentor, Stringcomp, Simon, and Logo--were designed and implemented as integral parts of research into the various ways computers may be used for teaching problem-solving concepts and skills. Various instructional contexts, among them medicine, mathematics, physics, and basic problem-solving for elementary school children,…

  11. ENVIRONMENTAL PROBLEM SOLVING WITH GEOGRAPHIC INFORMATION SYSTEMS: 1994 AND 1999 CONFERENCE PROCEEDINGS

    EPA Science Inventory

    These two national conferences, held in Cincinnati, Ohio in 1994 and 1999, addressed the area of environmental problem solving with Geographic Information Systems. This CD-ROM is a compilation of the proceedings in PDF format. The emphasis of the conference presentations were on ...

  12. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning.

    PubMed

    Custers, Eugène J F M

    2013-08-01

    Recently, human reasoning, problem solving, and decision making have been viewed as products of two separate systems: "System 1," the unconscious, intuitive, or nonanalytic system, and "System 2," the conscious, analytic, or reflective system. This view has penetrated the medical education literature, yet the idea of two independent dichotomous cognitive systems is not entirely without problems. This article outlines the difficulties of this "two-system view" and presents an alternative, developed by K.R. Hammond and colleagues, called cognitive continuum theory (CCT). CCT is featured by three key assumptions. First, human reasoning, problem solving, and decision making can be arranged on a cognitive continuum, with pure intuition at one end, pure analysis at the other, and a large middle ground called "quasirationality." Second, the nature and requirements of the cognitive task, as perceived by the person performing the task, determine to a large extent whether a task will be approached more intuitively or more analytically. Third, for optimal task performance, this approach needs to match the cognitive properties and requirements of the task. Finally, the author makes a case that CCT is better able than a two-system view to describe medical problem solving and clinical reasoning and that it provides clear clues for how to organize training in clinical reasoning.

  13. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    PubMed Central

    Musen, M. A.

    1998-01-01

    When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community. PMID:9929181

  14. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    PubMed

    Musen, M A

    1998-01-01

    When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.

  15. Singular perturbation solutions of steady-state Poisson-Nernst-Planck systems.

    PubMed

    Wang, Xiang-Sheng; He, Dongdong; Wylie, Jonathan J; Huang, Huaxiong

    2014-02-01

    We study the Poisson-Nernst-Planck (PNP) system with an arbitrary number of ion species with arbitrary valences in the absence of fixed charges. Assuming point charges and that the Debye length is small relative to the domain size, we derive an asymptotic formula for the steady-state solution by matching outer and boundary layer solutions. The case of two ionic species has been extensively studied, the uniqueness of the solution has been proved, and an explicit expression for the solution has been obtained. However, the case of three or more ions has received significantly less attention. Previous work has indicated that the solution may be nonunique and that even obtaining numerical solutions is a difficult task since one must solve complicated systems of nonlinear equations. By adopting a methodology that preserves the symmetries of the PNP system, we show that determining the outer solution effectively reduces to solving a single scalar transcendental equation. Due to the simple form of the transcendental equation, it can be solved numerically in a straightforward manner. Our methodology thus provides a standard procedure for solving the PNP system and we illustrate this by solving some practical examples. Despite the fact that for three ions, previous studies have indicated that multiple solutions may exist, we show that all except for one of these solutions are unphysical and thereby prove the existence and uniqueness for the three-ion case.
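The paper's scalar transcendental equation depends on the ion valences and is not given in the abstract, so the equation below is only an illustrative stand-in of the same flavor (a charge-neutrality-style balance of exponentials), solved by simple bracketing bisection:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f(lo) and f(hi) bracket a root."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# stand-in equation: exp(phi) - 2*exp(-phi) = 0, whose root is ln(2)/2
phi = bisect(lambda p: math.exp(p) - 2.0 * math.exp(-p), -5.0, 5.0)
```

Reducing the outer solution to one such scalar root-find is what makes the authors' procedure "straightforward" numerically, in contrast to solving the full coupled nonlinear PNP system.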

  16. MSC/NASTRAN DMAP Alter Used for Closed-Form Static Analysis With Inertia Relief and Displacement-Dependent Loads

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is a common task in the aerospace industry. Often, these problems are solved by static analysis with inertia relief. This technique allows for a free-free static analysis by balancing the applied loads with the inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus the displacement-dependent loads. A launch vehicle being acted upon by an aerodynamic loading can have such applied loads. The final displacements of such systems are commonly determined with iterative solution techniques. Unfortunately, these techniques can be time consuming and labor intensive. Because the coupled system equations for free-free systems with displacement-dependent loads can be written in closed form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. An MSC/NASTRAN (MacNeal-Schwendler Corporation/NASA Structural Analysis) DMAP (Direct Matrix Abstraction Program) Alter was used to include displacement-dependent loads in static analysis with inertia relief. It efficiently solved a common aerospace problem that typically has been solved with an iterative technique.
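The closed-form idea described above can be sketched on a toy constrained stiffness system rather than a real inertia-relief model: if the applied load is f + C x (displacement-dependent), the iterative scheme x_{k+1} = K^{-1}(f + C x_k) can be replaced by the single closed-form solve (K - C) x = f. All matrices here are random stand-ins, not NASTRAN data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
K = np.diag(np.full(n, 10.0)) + 0.1 * rng.standard_normal((n, n))
K = (K + K.T) / 2                       # symmetric "stiffness" matrix
C = 0.2 * rng.standard_normal((n, n))   # displacement-dependent load matrix
f = rng.standard_normal(n)

# closed-form solution: move the displacement-dependent load to the left side
x_closed = np.linalg.solve(K - C, f)

# fixed-point iteration, for comparison with the iterative technique
x = np.zeros(n)
for _ in range(200):
    x = np.linalg.solve(K, f + C @ x)
```

The iteration converges only when the spectral radius of K^{-1}C is below one, whereas the closed-form solve has no such restriction, which is the advantage the abstract attributes to the DMAP Alter approach.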

  17. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept: reasoning principles that integrate traditional engineering problem solving with systems theory, management science, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems from a long-range perspective. Long-range planning has great potential to improve productivity through a systematic and organized approach; efficiency and cost effectiveness are thus the driving forces behind organizing engineering problems in this way. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Because of the focus and application of the research, other significant factors (e.g., human behavior and decision making) are considered but not emphasized.

  18. A Flowchart-Based Intelligent Tutoring System for Improving Problem-Solving Skills of Novice Programmers

    ERIC Educational Resources Information Center

    Hooshyar, D.; Ahmad, R. B.; Yousefi, M.; Yusop, F. D.; Horng, S.-J.

    2015-01-01

    Intelligent tutoring and personalization are considered the two most important factors in research on learning systems and environments. An effective tool for improving problem-solving ability is an Intelligent Tutoring System, which is capable of mimicking a human tutor's actions in implementing a one-to-one personalized and…

  19. A Study of Multi-Representation of Geometry Problem Solving with Virtual Manipulatives and Whiteboard System

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie

    2009-01-01

    In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…

  20. Solving a System of Nonlinear Algebraic Equations: You Only Get Error Messages--What to Do Next?

    ERIC Educational Resources Information Center

    Shacham, Mordechai; Brauner, Neima

    2017-01-01

    Chemical engineering problems often involve the solution of systems of nonlinear algebraic equations (NLE). There are several software packages that can be used for solving NLE systems, but they may occasionally fail, especially in cases where the mathematical model contains discontinuities and/or regions where some of the functions are undefined.…
