Sample records for problem full solution

  1. An algorithm for full parametric solution of problems on the statics of orthotropic plates by the method of boundary states with perturbations

    NASA Astrophysics Data System (ADS)

    Penkov, V. B.; Ivanychev, D. A.; Novikova, O. S.; Levina, L. V.

    2018-03-01

    The article substantiates the possibility of building full parametric analytical solutions of mathematical physics problems in arbitrary regions by means of computer systems. The suggested effective means for such solutions is the method of boundary states with perturbations, which aptly incorporates all parameters of an orthotropic medium in a general solution. We performed check calculations of elastic fields of an anisotropic rectangular region (test and calculation problems) for a generalized plane stress state.

  2. North Dakota's Centennial Quilt and Problem Solvers: Solutions: The Library Problem

    ERIC Educational Resources Information Center

    Small, Marian

    2010-01-01

    Quilt investigations, such as the Barn quilt problem in the December 2008/January 2009 issue of "Teaching Children Mathematics" and its solutions in last month's issue, can spark interdisciplinary pursuits for teachers and exciting connections for the full range of elementary school students. This month, North Dakota's centennial quilt…

  3. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
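
The alternating scheme described above can be sketched as a consensus-style augmented Lagrangian iteration. This is a minimal illustration with two hypothetical quadratic component objectives (stand-ins for, say, body-wave and surface-wave misfits), not the authors' code:

```python
import numpy as np

# Full problem: minimize f1(m) + f2(m) over a single Earth model m, where
# f_i(m) = 0.5*||A_i m - d_i||^2 plays the role of a data-subset misfit.
# Decomposed form: minimize f1(m1) + f2(m2) subject to m1 = m2 = z.
rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
m_true = np.array([1.0, -2.0, 0.5])
d1, d2 = A1 @ m_true, A2 @ m_true          # consistent synthetic data

rho = 1.0                                   # penalty weight
lam1, lam2 = np.zeros(3), np.zeros(3)       # Lagrange multipliers
z = np.zeros(3)                             # common (consensus) model
I = np.eye(3)
for _ in range(300):
    # Separate solution of each component problem (closed form for quadratics)
    m1 = np.linalg.solve(A1.T @ A1 + rho * I, A1.T @ d1 - lam1 + rho * z)
    m2 = np.linalg.solve(A2.T @ A2 + rho * I, A2.T @ d2 - lam2 + rho * z)
    z = 0.5 * ((m1 + lam1 / rho) + (m2 + lam2 / rho))   # common-model update
    lam1 += rho * (m1 - z)                  # multipliers steer m1, m2 toward z
    lam2 += rho * (m2 - z)

print(np.round(z, 4))
```

With consistent data the component models and the consensus model all converge to the minimizer of the full problem.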

  4. Optimal solution of full fuzzy transportation problems using total integral ranking

    NASA Astrophysics Data System (ADS)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    The full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply and decision variables are all expressed as fuzzy numbers. Solving a fuzzy transportation problem requires converting the fuzzy parameters to crisp numbers, a step known as defuzzification. In the new total integral ranking method, trapezoidal fuzzy numbers are converted to hexagonal fuzzy numbers, which yields consistent defuzzification for symmetric hexagonal fuzzy numbers and for non-symmetric type-2 fuzzy numbers together with triangular fuzzy numbers. The optimal solution of the FTP is then computed with a fuzzy transportation algorithm based on the least cost method. This optimal solution shows that total integral ranking of fuzzy numbers with different indices of optimism gives different optimal values. In addition, the total integral ranking value obtained with hexagonal fuzzy numbers gives a better optimal value than that obtained with trapezoidal fuzzy numbers.
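
After defuzzification, the crisp transportation problem can be given an initial solution with the least cost method the abstract mentions. A minimal sketch on a hypothetical crisp instance (the costs, supplies and demands are illustrative, not the paper's data):

```python
import numpy as np

def least_cost_method(cost, supply, demand):
    """Least cost method: repeatedly allocate as much as possible at the
    cheapest remaining cell, then cross out the exhausted row or column."""
    cost = cost.astype(float)
    supply, demand = supply.astype(float).copy(), demand.astype(float).copy()
    alloc = np.zeros_like(cost)
    active = np.ones_like(cost, dtype=bool)
    while active.any():
        masked = np.where(active, cost, np.inf)
        i, j = np.unravel_index(np.argmin(masked), cost.shape)
        q = min(supply[i], demand[j])
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            active[i, :] = False
        if demand[j] == 0:
            active[:, j] = False
    return alloc

# Defuzzified (crisp) costs, supplies and demands from a hypothetical FFTP instance
cost = np.array([[4, 6, 8], [5, 3, 7], [6, 4, 2]])
supply = np.array([30, 40, 30])
demand = np.array([20, 45, 35])
alloc = least_cost_method(cost, supply, demand)
print(alloc, (alloc * cost).sum())
```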

  5. Comparison of application of various crossovers in solving inhomogeneous minimax problem modified by Goldberg model

    NASA Astrophysics Data System (ADS)

    Kobak, B. V.; Zhukovskiy, A. G.; Kuzin, A. P.

    2018-05-01

    This paper considers one of the classical NP-complete problems, an inhomogeneous minimax problem. When solving such a problem at large scale, difficulties arise in obtaining an exact solution, so we aim instead at obtaining an optimal solution in acceptable time. Among the wide range of genetic algorithm models, we choose the modified Goldberg model, which the authors have previously used successfully in solving NP-complete problems. The classical Goldberg model uses a single-point crossover and a single-point mutation, which somewhat decreases the accuracy of the obtained results. In this article we propose using a full two-point crossover with the various mutations researched previously. In addition, the work studies the crossover probability required to obtain more accurate results. Results of the computational experiment showed that the higher the probability of crossover, the higher the quality of both the average results and the best solutions. It was also found that the larger the number of individuals and the number of repetitions, the closer both the average results and the best solutions are to the optimum. The paper shows how the use of a full two-point crossover increases the accuracy of solving an inhomogeneous minimax problem; the solution time increases but remains polynomial.
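
A full two-point crossover of the kind the paper advocates can be sketched in a few lines; the chromosome encoding below (a task-to-processor assignment for the minimax problem) is an assumption for illustration:

```python
import random

def two_point_crossover(p1, p2, rng):
    """Full two-point crossover: swap the gene segment between two cut points."""
    n = len(p1)
    i, j = sorted(rng.sample(range(1, n), 2))  # distinct cut points inside the chromosome
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

# Chromosomes encode a task-to-processor assignment: gene k is the processor
# that executes task k (hypothetical encoding, not necessarily the authors').
rng = random.Random(42)
a = [0, 0, 1, 2, 1, 0, 2, 1]
b = [2, 1, 0, 0, 2, 1, 1, 0]
c1, c2 = two_point_crossover(a, b, rng)
print(c1, c2)
```

The operator preserves chromosome length, and at every gene position the pair of children carries exactly the pair of parental genes.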

  6. Solutions to an advanced functional partial differential equation of the pantograph type

    PubMed Central

    Zaidi, Ali A.; Van Brunt, B.; Wake, G. C.

    2015-01-01

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained. PMID:26345391

  7. Solutions to an advanced functional partial differential equation of the pantograph type.

    PubMed

    Zaidi, Ali A; Van Brunt, B; Wake, G C

    2015-07-08

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained.

  8. An algorithm for analytical solution of basic problems featuring elastostatic bodies with cavities and surface flaws

    NASA Astrophysics Data System (ADS)

    Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.

    2018-03-01

    Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media, based on state-of-the-art computing facilities that support computer algebra. The methodology includes: direct and reverse application of the P-Theorem; methods of accounting for the physical properties of media; and accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool for the task is the sustainable method of boundary states, originally designed for computer algebra and based on the isomorphism of the Hilbert spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.

  9. Method of Harmonic Balance in Full-Scale-Model Tests of Electrical Devices

    NASA Astrophysics Data System (ADS)

    Gorbatenko, N. I.; Lankin, A. M.; Lankin, M. V.

    2017-01-01

    Methods for determining the weber-ampere characteristics of electrical devices are suggested: one is based on the solution of the direct harmonic balance problem, and the other on the solution of the inverse harmonic balance problem by the method of full-scale-model tests. The mathematical model of the device is constructed using the describing function and simplex optimization methods. The presented results of experimental applications of the method show its efficiency. An advantage of the method is that it can be applied to nondestructive inspection of electrical devices during their production and operation.

  10. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
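
The POD reduction step can be sketched with a singular value decomposition of a snapshot matrix; the toy model below (a rank-5 field on 200 nodes) is illustrative, not the study's groundwater model:

```python
import numpy as np

# Snapshot matrix: columns are solutions of a (toy) full model for different
# pumping scenarios; here the field truly lives in a 5-mode subspace.
rng = np.random.default_rng(1)
nodes, snaps = 200, 30
modes = rng.normal(size=(nodes, 5))
S = modes @ rng.normal(size=(5, snaps))          # hypothetical snapshot matrix

# POD basis: left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))                 # numerical rank = retained modes
Phi = U[:, :r]                                   # reduced basis (200 -> r unknowns)

# A full-model state h is represented by r coefficients: h ~ Phi @ (Phi.T @ h),
# so searches (e.g. by the GA) can work with r numbers instead of 200.
h = S[:, 0]
rel_err = np.linalg.norm(h - Phi @ (Phi.T @ h)) / np.linalg.norm(h)
print(r, rel_err)
```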

  11. A new modal-based approach for modelling the bump foil structure in the simultaneous solution of foil-air bearing rotor dynamic problems

    NASA Astrophysics Data System (ADS)

    Bin Hassan, M. F.; Bonello, P.

    2017-05-01

    Recently proposed techniques for the simultaneous solution of foil-air bearing (FAB) rotor dynamic problems have been limited to a simple bump foil model in which the individual bumps were modelled as independent spring-damper (ISD) subsystems. The present paper addresses this limitation by introducing a modal model of the bump foil structure into the simultaneous solution scheme. The dynamics of the corrugated bump foil structure are first studied using the finite element (FE) technique. This study is experimentally validated using a purpose-made corrugated foil structure. Based on the findings of this study, it is proposed that the dynamics of the full foil structure, including bump interaction and foil inertia, can be represented by a modal model comprising a limited number of modes. This full foil structure modal model (FFSMM) is then adapted into the rotordynamic FAB problem solution scheme, instead of the ISD model. Preliminary results using the FFSMM under static and unbalance excitation conditions are shown to be reliable by comparison against the corresponding ISD foil model results and by cross-correlating different methods for computing the deflection of the full foil structure. The rotor-bearing model is also validated against experimental and theoretical results in the literature.

  12. Angular spectral framework to test full corrections of paraxial solutions.

    PubMed

    Mahillo-Isla, R; González-Morales, M J

    2015-07-01

    Different correction methods for paraxial solutions have been used when such solutions extend out of the paraxial regime, with authors choosing a correction method guided either by experience or by an educated hypothesis pertinent to the particular problem being tackled. This article provides a framework for classifying full-wave correction schemes, so that, for a given solution of the paraxial wave equation, the best of the available correction schemes can be selected. Some common correction methods are considered and evaluated under the proposed scope. A further contribution is the set of necessary conditions that two solutions of the Helmholtz equation must satisfy for a common solution of the parabolic wave equation to be accepted as a paraxial approximation of both.

  13. Revisiting software specification and design for large astronomy projects

    NASA Astrophysics Data System (ADS)

    Wiant, Scott; Berukoff, Steven

    2016-07-01

    The separation of science and engineering in the delivery of software systems overlooks the true nature of the problem being solved and the organization that will solve it. A systems engineering approach to managing the requirements flow between these two groups, as between a customer and a contractor, has been used with varying degrees of success by well-known entities such as the U.S. Department of Defense. However, treating science as the customer and engineering as the contractor fosters unfavorable consequences that could be avoided and misses opportunities. For example, the "problem" being solved is only partially specified through the requirements generation process, since that process focuses on the detailed specification guiding the parties to a technical solution. Equally important is the portion of the problem that will be solved through the definition of processes and of the staff interacting through them. This interchange between people and processes is often underrepresented and underappreciated. By concentrating on the full problem and collaborating on a strategy for its solution, a science-implementing organization can realize the benefits of driving towards common goals (not just requirements) and a cohesive solution to the entire problem. The initial phase of any project, when well executed, is often the most difficult yet most critical, and thus it is essential to employ a methodology that reinforces collaboration and leverages the full suite of capabilities within the team. This paper describes an integrated approach to specifying the needs induced by a problem and the design of its solution.

  14. Eshelby problem of polygonal inclusions in anisotropic piezoelectric full- and half-planes

    NASA Astrophysics Data System (ADS)

    Pan, E.

    2004-03-01

    This paper presents an exact closed-form solution for the Eshelby problem of polygonal inclusion in anisotropic piezoelectric full- and half-planes. Based on the equivalent body-force concept of eigenstrain, the induced elastic and piezoelectric fields are first expressed in terms of line integral on the boundary of the inclusion with the integrand being the Green's function. Using the recently derived exact closed-form line-source Green's function, the line integral is then carried out analytically, with the final expression involving only elementary functions. The exact closed-form solution is applied to a square-shaped quantum wire within semiconductor GaAs full- and half-planes, with results clearly showing the importance of material orientation and piezoelectric coupling. While the elastic and piezoelectric fields within the square-shaped quantum wire could serve as benchmarks to other numerical methods, the exact closed-form solution should be useful to the analysis of nanoscale quantum-wire structures where large strain and electric fields could be induced by the misfit strain.

  15. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on full-scale modelling experiments. Using the gridless method of fundamental solutions and its variants in combination with grid methods (finite differences and finite elements) considerably reduces the dimensionality of the field-calculation task and hence the calculation time. The method is implemented using fictitious magnetic charges, and much attention is given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and examples of this approach are given. The research results allow the authors to recommend this approach, within the method of fundamental solutions, for full-scale modelling tests of technical systems.

  16. The Nature of Problem Solving: Using Research to Inspire 21st Century Learning

    ERIC Educational Resources Information Center

    Csapó, Beno, Ed.; Funke, Joachim, Ed.

    2017-01-01

    Solving non-routine problems is a key competence in a world full of changes, uncertainty and surprise where we strive to achieve so many ambitious goals. But the world is also full of solutions because of the extraordinary competences of humans who search for and find them. We must explore the world around us in a thoughtful way, acquire knowledge…

  17. Thin-layer and full Navier-Stokes calculations for turbulent supersonic flow over a cone at an angle of attack

    NASA Technical Reports Server (NTRS)

    Smith, Crawford F.; Podleski, Steve D.

    1993-01-01

    The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report, results obtained with CFL3D, a thin-layer Navier-Stokes code, are compared with results obtained from PARC3D, a full Navier-Stokes code. In order to gain an understanding of the use of this code, a simple problem was chosen in which several key features of the code could be exercised. The problem chosen is a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. The results obtained with the CFL3D code compared well with the PARC3D solutions.

  18. The Riemann problem for the relativistic full Euler system with generalized Chaplygin proper energy density-pressure relation

    NASA Astrophysics Data System (ADS)

    Shao, Zhiqiang

    2018-04-01

    The relativistic full Euler system with generalized Chaplygin proper energy density-pressure relation is studied. The Riemann problem is solved constructively. The delta shock wave arises in the Riemann solutions, provided that the initial data satisfy certain conditions, although the system is strictly hyperbolic and the first and third characteristic fields are genuinely nonlinear, while the second one is linearly degenerate. There are five kinds of Riemann solutions: four consist only of a shock wave and a centered rarefaction wave, or two shock waves, or two centered rarefaction waves, together with a contact discontinuity between the constant states (precisely speaking, these solutions consist in general of three waves); the fifth involves delta shocks on which both the rest mass density and the proper energy density simultaneously contain the Dirac delta function. This is quite different from previous results, in which only one state variable contains the Dirac delta function. The formation mechanism, generalized Rankine-Hugoniot relation and entropy condition are clarified for this type of delta shock wave. Under the generalized Rankine-Hugoniot relation and entropy condition, we establish the existence and uniqueness of solutions involving delta shocks for the Riemann problem.

  19. Immediate and Sustained Effects of Planning in a Problem-Solving Task

    ERIC Educational Resources Information Center

    Delaney, Peter F.; Ericsson, K. Anders; Knowles, Martin E.

    2004-01-01

    In 4 experiments, instructions to plan a task (water jugs) that normally produces little planning altered how participants solved the problems and resulted in enhanced learning and memory. Experiment 1 identified planning strategies that allowed participants to plan full solutions to water jugs problems. Experiment 2 showed that experience with…
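
The water jugs task itself is mechanically solvable; a breadth-first search over jug states (on a classic 8-5-3 instance, not necessarily the authors' stimuli) produces the kind of full solution plan participants were asked to form:

```python
from collections import deque

def solve_jugs(caps, start, goal):
    """Breadth-first search for a shortest sequence of pours between jug states."""
    n = len(caps)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for i in range(n):              # pour jug i into jug j
            for j in range(n):
                if i == j or state[i] == 0 or state[j] == caps[j]:
                    continue
                amt = min(state[i], caps[j] - state[j])
                nxt = list(state)
                nxt[i] -= amt
                nxt[j] += amt
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [(i, j)]))
    return None

# Classic instance: jugs of capacity 8, 5, 3; start (8,0,0); split into (4,4,0).
plan = solve_jugs((8, 5, 3), (8, 0, 0), (4, 4, 0))
print(len(plan), plan)
```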

  20. On the Vanishing Dissipation Limit for the Full Navier-Stokes-Fourier System with Non-slip Condition

    NASA Astrophysics Data System (ADS)

    Wang, Y.-G.; Zhu, S.-Y.

    2018-06-01

    In this paper, we study the vanishing dissipation limit problem for the full Navier-Stokes-Fourier equations with non-slip boundary condition in a smooth bounded domain Ω ⊆ R^3. By using Kato's idea (Math Sci Res Inst Publ 2:85-98, 1984) of constructing an artificial boundary layer, we obtain a sufficient condition for the convergence of the solution of the full Navier-Stokes-Fourier equations to the solution of the compressible Euler equations in the energy space L^2(Ω) uniformly in time.

  21. Root finding in the complex plane for seismo-acoustic propagation scenarios with Green's function solutions.

    PubMed

    McCollom, Brittany A; Collis, Jon M

    2014-09-01

    A normal mode solution to the ocean acoustic problem of the Pekeris waveguide with an elastic bottom using a Green's function formulation for a compressional wave point source is considered. Analytic solutions to these types of waveguide propagation problems are strongly dependent on the eigenvalues of the problem; these eigenvalues represent horizontal wavenumbers, corresponding to propagating modes of energy. The eigenvalues arise as singularities in the inverse Hankel transform integral and are specified by roots to a characteristic equation. These roots manifest themselves as poles in the inverse transform integral and can be both subtle and difficult to determine. Following methods previously developed [S. Ivansson et al., J. Sound Vib. 161 (1993)], a root finding routine has been implemented using the argument principle. Using the roots to the characteristic equation in the Green's function formulation, full-field solutions are calculated for scenarios where an acoustic source lies in either the water column or elastic half space. Solutions are benchmarked against laboratory data and existing numerical solutions.
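
The argument-principle root count such a routine relies on can be sketched as a winding number computation; the characteristic function below is a toy polynomial with known roots, not the Pekeris dispersion relation:

```python
import cmath

def winding_number(f, center, radius, n=20000):
    """Count zeros of analytic f inside a circle via the argument principle:
    N = (1/2*pi*i) * contour integral of f'(z)/f(z) dz, i.e. the winding
    number of f around 0 along the contour (f must be zero-free on it)."""
    total = 0.0
    prev = cmath.phase(f(center + radius))
    for k in range(1, n + 1):
        z = center + radius * cmath.exp(2j * cmath.pi * k / n)
        cur = cmath.phase(f(z))
        d = cur - prev
        while d > cmath.pi:      # unwrap the phase jump
            d -= 2 * cmath.pi
        while d < -cmath.pi:
            d += 2 * cmath.pi
        total += d
        prev = cur
    return round(total / (2 * cmath.pi))

# Toy characteristic function with known roots at 1+1j, -2 and 3j
f = lambda z: (z - (1 + 1j)) * (z + 2) * (z - 3j)
print(winding_number(f, 0, 2.5))   # the circle |z| = 2.5 encloses 1+1j and -2
```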

  22. Eshelby's problem of polygonal inclusions with polynomial eigenstrains in an anisotropic magneto-electro-elastic full plane

    PubMed Central

    Lee, Y.-G.; Zou, W.-N.; Pan, E.

    2015-01-01

    This paper presents a closed-form solution for the arbitrary polygonal inclusion problem with polynomial eigenstrains of arbitrary order in an anisotropic magneto-electro-elastic full plane. The additional displacements or eigendisplacements, instead of the eigenstrains, are assumed to be a polynomial with general terms of order M+N. By virtue of the extended Stroh formalism, the induced fields are expressed in terms of a group of basic functions which involve boundary integrals of the inclusion domain. For the special case of polygonal inclusions, the boundary integrals are carried out explicitly, and their averages over the inclusion are also obtained. The induced fields under quadratic eigenstrains are analysed mostly in figures and tables, as are those under linear and cubic eigenstrains. The connection between the present solution and the solution via the Green's function method is established and numerically verified. The singularity at the vertices of the arbitrary polygon is further analysed via the basic functions. The general solution and the numerical results for the constant, linear, quadratic and cubic eigenstrains presented in this paper enable us to investigate the features of the inclusion and inhomogeneity problem concerning polynomial eigenstrains in semiconductors and advanced composites, while the results can further serve as benchmarks for future analyses of Eshelby's inclusion problem. PMID:26345141

  23. Preconditioned conjugate residual methods for the solution of spectral equations

    NASA Technical Reports Server (NTRS)

    Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.

    1986-01-01

    Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
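
A preconditioned conjugate residual iteration of the general kind described can be sketched as follows; the diagonal (Jacobi-like) preconditioner here merely stands in for the inexact finite-difference operator, which this sketch does not construct:

```python
import numpy as np

def preconditioned_cr(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate residual iteration for a symmetric matrix A.
    M_inv(v) applies the inverse of the preconditioner to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)                 # preconditioned residual
    p = z.copy()
    Ap = A @ p
    rz = z @ (A @ z)
    for _ in range(maxit):
        Minv_Ap = M_inv(Ap)
        alpha = rz / (Ap @ Minv_Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        Az = A @ z
        rz_new = z @ Az
        beta = rz_new / rz
        rz = rz_new
        p = z + beta * p
        Ap = Az + beta * Ap      # maintain A @ p without an extra product
    return x

# Toy symmetric positive definite system with a diagonal preconditioner
rng = np.random.default_rng(2)
Q = rng.normal(size=(20, 20))
A = Q @ Q.T + 20 * np.eye(20)
b = rng.normal(size=20)
d = np.diag(A)
x = preconditioned_cr(A, b, lambda v: v / d)
print(np.linalg.norm(A @ x - b))
```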

  24. Achieving network level privacy in Wireless Sensor Networks.

    PubMed

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2010-01-01

    Full network level privacy has often been categorized into four sub-categories: identity, route, location and data privacy. Achieving full network level privacy is a critical and challenging problem due to the constraints imposed by the sensor nodes (e.g., energy, memory and computation power), sensor networks (e.g., mobility and topology) and QoS issues (e.g., packet reachability and timeliness). In this paper, we propose two new identity, route and location privacy algorithms and a data privacy mechanism that address this problem. The proposed solutions provide additional trustworthiness and reliability at a modest cost in memory and energy. We also prove that our proposed solutions provide protection against various privacy disclosure attacks, such as eavesdropping and hop-by-hop traceback attacks.

  25. Achieving Network Level Privacy in Wireless Sensor Networks

    PubMed Central

    Shaikh, Riaz Ahmed; Jameel, Hassan; d’Auriol, Brian J.; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2010-01-01

    Full network level privacy has often been categorized into four sub-categories: identity, route, location and data privacy. Achieving full network level privacy is a critical and challenging problem due to the constraints imposed by the sensor nodes (e.g., energy, memory and computation power), sensor networks (e.g., mobility and topology) and QoS issues (e.g., packet reachability and timeliness). In this paper, we propose two new identity, route and location privacy algorithms and a data privacy mechanism that address this problem. The proposed solutions provide additional trustworthiness and reliability at a modest cost in memory and energy. We also prove that our proposed solutions provide protection against various privacy disclosure attacks, such as eavesdropping and hop-by-hop traceback attacks. PMID:22294881

  26. Numerical solution of the full potential equation using a chimera grid approach

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1995-01-01

    A numerical scheme utilizing a chimera zonal grid approach for solving the full potential equation in two spatial dimensions is described. Within each grid zone a fully-implicit approximate factorization scheme is used to advance the solution one iteration. This is followed by the explicit advance of all common zonal grid boundaries using a bilinear interpolation of the velocity potential. The presentation is highlighted with numerical results simulating the flow about a two-dimensional, nonlifting, circular cylinder. For this problem, the flow domain is divided into two parts: an inner portion covered by a polar grid and an outer portion covered by a Cartesian grid. Both incompressible and compressible (transonic) flow solutions are included. Comparisons made with an analytic solution as well as single grid results indicate that the chimera zonal grid approach is a viable technique for solving the full potential equation.
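
The bilinear interpolation used to exchange the velocity potential across zonal boundaries is a standard operation; a minimal sketch on a unit-spaced grid with hypothetical nodal values:

```python
import math

def bilinear(phi, x, y):
    """Bilinear interpolation of a nodal field phi[i][j] on a unit-spaced grid."""
    i, j = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * phi[i][j] + fx * (1 - fy) * phi[i + 1][j]
            + (1 - fx) * fy * phi[i][j + 1] + fx * fy * phi[i + 1][j + 1])

# A field that is linear in x and y is reproduced exactly, which is why
# potential values transfer cleanly across overlapping zonal boundaries.
phi = [[2 * i + 3 * j for j in range(4)] for i in range(4)]
print(bilinear(phi, 1.25, 0.5))
```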

  27. Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Corban, J. E.

    1990-01-01

    The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.

  8. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
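The two-stage strategy described above can be sketched in miniature: a classical genetic algorithm coarsely explores a large discrete space, then a second, more exhaustive solver refines the answer within the reduced neighborhood (here a brute-force local search stands in for the QA device). The fitness function and all parameters below are toy assumptions, not the satellite-positioning model.

```python
import random

random.seed(0)

def fitness(x):
    # toy multimodal objective; designed so the global minimum is at x = 40
    return (x - 40) ** 2 + 10 * (x % 5)

def ga_coarse(pop_size=20, gens=40, lo=0, hi=1023):
    """Classical GA stage: broad search over the full space."""
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        # children are mutated copies of surviving individuals
        children = [min(hi, max(lo, random.choice(survivors)
                                + random.randint(-8, 8)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

def local_refine(x, radius=25, lo=0, hi=1023):
    """Second stage: exhaustive search in a small window around the GA answer,
    standing in for the quantum annealer's refinement of the reduced space."""
    return min(range(max(lo, x - radius), min(hi, x + radius) + 1), key=fitness)

coarse = ga_coarse()
best = local_refine(coarse)
```

The refinement stage can never do worse than the coarse answer, which mirrors the paper's point that the second solver operates within the already-promising region found classically.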

  9. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. ALPS (A Linear Program Solver) is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
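For intuition about what an LP solver such as ALPS computes, here is a minimal two-variable sketch: maximize c·x subject to Ax ≤ b and x ≥ 0. A production code uses the simplex method; this stand-in simply enumerates constraint intersections (vertices of the feasible polygon), which gives the same answer for small dense problems.

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x over {x >= 0 : A x <= b} by vertex enumeration (2D only)."""
    # append the non-negativity constraints -x <= 0, -y <= 0
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = list(b) + [0.0, 0.0]
    best, best_x = None, None
    for (i, j) in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]; a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel constraints: no vertex
        x = (rhs[i] * a22 - a12 * rhs[j]) / det
        y = (a11 * rhs[j] - rhs[i] * a21) / det
        # keep the vertex only if it satisfies every constraint
        if all(r[0] * x + r[1] * y <= v + 1e-9 for r, v in zip(rows, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val > best:
                best, best_x = val, (x, y)
    return best, best_x

# maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0
opt, point = solve_lp_2d([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
```

The optimum of a linear program always lies at a vertex of the feasible region, which is why vertex enumeration (and, more efficiently, the simplex method's vertex-to-vertex walk) works.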

  10. Single Polygon Counting on Cayley Tree of Order 3

    NASA Astrophysics Data System (ADS)

    Pah, Chin Hee

    2010-07-01

    We show that one form of generalized Catalan numbers is the solution to the problem of counting the distinct connected components with finitely many vertices that contain a fixed root in the semi-infinite Cayley tree of order 3. We give the formula for the full graph, the Cayley tree of order 3, which is derived from the generalized Catalan numbers. Using ratios of Gamma functions, two upper bounds are given for the problem defined on the semi-infinite Cayley tree of order 3 as well as on the full graph.
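One common form of "generalized Catalan numbers" is the Fuss-Catalan family C3(n) = C(3n, n) / (2n + 1), which counts rooted ternary trees with n internal vertices; whether this is exactly the form used in the paper is an assumption made here for illustration only.

```python
from math import comb

def fuss_catalan_3(n):
    # Fuss-Catalan number for ternary trees; the division is always exact
    return comb(3 * n, n) // (2 * n + 1)

seq = [fuss_catalan_3(n) for n in range(6)]  # 1, 1, 3, 12, 55, 273
```

The ordinary Catalan numbers are the p = 2 member of the same family, C(2n, n) / (n + 1), so the "generalized" count reduces to the classical one for binary branching.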

  11. Test problems for inviscid transonic flow

    NASA Technical Reports Server (NTRS)

    Carlson, L. A.

    1979-01-01

    The solution of test problems with the TRANDES program is discussed. This method utilizes the full, inviscid, perturbation potential flow equation in a Cartesian grid system that is stretched to infinity. This equation is represented by a nonconservative system of finite difference equations that includes at supersonic points a rotated difference scheme and is solved by column relaxation. The solution usually starts from a zero perturbation potential on a very coarse grid (typically 13 by 7) followed by several grid halvings until a final solution is obtained on a fine grid (97 by 49).
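The coarse-to-fine strategy above can be sketched in one dimension: relax a discrete Laplace equation on a coarse grid, interpolate the result onto a grid of doubled resolution, and relax again. The 1D problem, sweep counts, and grid sizes are toy assumptions standing in for TRANDES's 2D column relaxation.

```python
def relax(u, sweeps):
    # Gauss-Seidel sweeps for u'' = 0 with fixed end values
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1])
    return u

def refine(u):
    # double the grid resolution: keep old nodes, insert midpoint averages
    fine = []
    for a, b in zip(u, u[1:]):
        fine += [a, 0.5 * (a + b)]
    return fine + [u[-1]]

u = [0.0] * 12 + [1.0]          # 13-point coarse start, boundaries 0 and 1
for _ in range(3):               # three grid halvings: 13 -> 25 -> 49 -> 97
    u = relax(u, 200)
    u = refine(u)
u = relax(u, 400)
# the exact solution of u'' = 0 with these boundaries is a straight line
err = max(abs(ui - i / (len(u) - 1)) for i, ui in enumerate(u))
```

Relaxation converges quickly on the coarse grid, and interpolation hands the fine grid a nearly converged starting guess, which is the whole point of the grid-halving sequence described in the abstract.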

  12. Hearings Before the Subcommittee on Advanced Research and Technology of the Committee on Science and Astronautics. U.S. House of Representatives, Ninety-First Congress, First Session

    DTIC Science & Technology

    1969-12-11

    Procurement Regulations. As a result, the contractor’s commercial business bears its full share of IRAD cost and, in addition, the contractor generally...utilize the equipment profitably during the less busy periods. (Slide off). Now it would be presumptuous for me to try to suggest solutions to these...specific issues within the broad categories of problems that have been identified. The solution of these specific problems hopefully will lead to a

  13. Inquiry and Critical Thinking in an Elementary Art Program

    ERIC Educational Resources Information Center

    Lampert, Nancy

    2013-01-01

    Critical thinking is thought-focused on how to solve a well-defined problem when several alternative solutions to the problem exist. Because critical thinking may help to build tolerance toward others, the author believes it is a worthwhile subject to investigate, given that people are living in an increasingly multicultural society full of…

  14. Collisional breakup in a quantum system of three charged particles

    PubMed

    Rescigno; Baertschy; Isaacs; McCurdy

    1999-12-24

    Since the invention of quantum mechanics, even the simplest example of the collisional breakup of a system of charged particles, e(-) + H --> H(+) + e(-) + e(-) (where e(-) is an electron and H is hydrogen), has resisted solution and is now one of the last unsolved fundamental problems in atomic physics. A complete solution requires calculation of the energies and directions for a final state in which all three particles are moving away from each other. Even with supercomputers, the correct mathematical description of this state has proved difficult to apply. A framework for solving ionization problems in many areas of chemistry and physics is finally provided by a mathematical transformation of the Schrödinger equation that makes the final state tractable, providing the key to a numerical solution of this problem that reveals its full dynamics.

  15. Page turning solutions for musicians: a survey.

    PubMed

    Wolberg, George; Schipper, Irene

    2012-01-01

    Musicians have long been hampered by the challenge in turning sheet music while their hands are occupied playing an instrument. The sight of a human page turner assisting a pianist during a performance, for instance, is not uncommon. This need for a page turning solution is no less acute during practice sessions, which account for the vast majority of playing time. Despite widespread appreciation of the problem, there have been virtually no robust and affordable products to assist the musician. Recent progress in assistive technology and electronic reading devices offers promising solutions to this long-standing problem. The objective of this paper is to survey the technology landscape and assess the benefits and drawbacks of page turning solutions for musicians. A full range of mechanical and digital page turning products are reviewed.

  16. Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

    NASA Astrophysics Data System (ADS)

    Alemany, Kristina

    Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. 
The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.

  17. An analytical method for the inverse Cauchy problem of Lame equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.

  18. A comparative study of full Navier-Stokes and Reduced Navier-Stokes analyses for separating flows within a diffusing inlet S-duct

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Reddy, D. R.; Kapoor, K.

    1993-01-01

    A three-dimensional implicit Full Navier-Stokes (FNS) analysis and a 3D Reduced Navier-Stokes (RNS) initial-value space-marching solution technique have been applied to a class of separated-flow problems within a diffusing S-duct configuration characterized by vortex lift-off. Both the Full Navier-Stokes and Reduced Navier-Stokes solution techniques were able to capture the overall flow physics of vortex lift-off; however, more consideration must be given to the development of turbulence models for the prediction of the locations of separation and reattachment. This accounts for some of the discrepancies in the prediction of the relevant inlet distortion descriptors, particularly circumferential distortion. The 3D RNS solution technique adequately described the topological structure of flow separation associated with vortex lift-off.

  19. Elementary solutions of coupled model equations in the kinetic theory of gases

    NASA Technical Reports Server (NTRS)

    Kriese, J. T.; Siewert, C. E.; Chang, T. S.

    1974-01-01

    The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.

  20. The Shape of a Sausage: A Challenging Problem in the Calculus of Variations

    ERIC Educational Resources Information Center

    Deakin, Michael A. B.

    2010-01-01

    Many familiar household objects (such as sausages) involve the maximization of a volume under geometric constraints. A flexible but inextensible membrane bounds a volume which is to be filled to capacity. In the case of the sausage, a full analytic solution is here provided. Other related but more difficult problems seem to demand approximate…

  1. Optimal placement of multiple types of communicating sensors with availability and coverage redundancy constraints

    NASA Astrophysics Data System (ADS)

    Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.

    2010-04-01

    Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
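The greedy placement idea referenced above can be sketched as a set-cover loop: at each step, add the candidate sensor that covers the most still-uncovered targets. The abstract coverage sets below are stand-ins for the probabilistic detection and communication maps computed in the paper, and all names are invented for illustration.

```python
def greedy_cover(candidates, targets):
    """candidates: dict sensor_name -> set of targets that sensor covers."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        # pick the sensor covering the most still-uncovered targets
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            break  # remaining targets are uncoverable by any sensor
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered

candidates = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}
chosen, uncovered = greedy_cover(candidates, targets={1, 2, 3, 4, 5, 6})
```

Greedy set cover is not optimal in general (it carries a logarithmic approximation factor), which matches the abstract's point that comparing the greedy answer against the strict binary-programming solution on reduced-size problems is what justifies trusting it on full-size ones.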

  2. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

    Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
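The homotopy idea above can be shown in the simplest scalar setting: deform an easy problem g(x) = 0 into the hard one f(x) = 0 through H(x, t) = (1 - t) g(x) + t f(x), tracking the solution from t = 0 to t = 1 with an Euler predictor and a Newton corrector. The cubic f below is a toy stand-in for the coupled matrix optimality conditions of the constrained LQG problem.

```python
def f(x):  return x**3 - 2*x - 5   # "hard" problem; real root near 2.0945515
def df(x): return 3*x**2 - 2
def g(x):  return x - 1.0          # "easy" problem, trivially solved by x = 1

def track(steps=100):
    x, t = 1.0, 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        # Euler predictor: dx/dt = -H_t / H_x along the solution path
        H_x = (1 - t) * 1.0 + t * df(x)
        H_t = -g(x) + f(x)
        x += dt * (-H_t / H_x)
        t += dt
        # Newton corrector: pull the predicted point back onto H(., t) = 0
        for _ in range(3):
            H = (1 - t) * g(x) + t * f(x)
            H_x = (1 - t) * 1.0 + t * df(x)
            x -= H / H_x
    return x

root = track()
```

As in the paper, the benefit is that the final solution is reached by following a differential equation from a problem whose solution is known, rather than by solving the hard problem cold.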

  3. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid, and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the offered methodology utilizing a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.

  4. Highly eccentric hip-hop solutions of the 2 N-body problem

    NASA Astrophysics Data System (ADS)

    Barrabés, Esther; Cors, Josep M.; Pinyol, Conxita; Soler, Jaume

    2010-02-01

    We show the existence of families of hip-hop solutions in the equal-mass 2 N-body problem which are close to highly eccentric planar elliptic homographic motions of 2 N bodies plus small perpendicular non-harmonic oscillations. By introducing a parameter ɛ, the homographic motion and the small amplitude oscillations can be uncoupled into a purely Keplerian homographic motion of fixed period and a vertical oscillation described by a Hill type equation. Small changes in the eccentricity induce large variations in the period of the perpendicular oscillation and give rise, via a Bolzano argument, to resonant periodic solutions of the uncoupled system in a rotating frame. For small ɛ≠0, the topological transversality persists and Brouwer’s fixed point theorem shows the existence of this kind of solutions in the full system.

  5. On the classification of the spectrally stable standing waves of the Hartree problem

    NASA Astrophysics Data System (ADS)

    Georgiev, Vladimir; Stefanov, Atanas

    2018-05-01

    We consider the fractional Hartree model, with general power non-linearity and arbitrary spatial dimension. We construct variationally the "normalized" solutions for the corresponding Choquard-Pekar model; in particular, a number of key properties, like smoothness and bell-shapedness, are established. As a consequence of the construction, we show that these solitons are spectrally stable as solutions to the time-dependent Hartree model. In addition, we analyze the spectral stability of the Moroz-Van Schaftingen solitons of the classical Hartree problem, in any dimension and power non-linearity. A full classification is obtained, the main conclusion of which is that only and exactly the "normalized" solutions (which exist only in a portion of the range) are spectrally stable.

  6. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
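The point above that linearization misses multimodal uncertainty can be illustrated in one dimension: for the forward model d = m² with Gaussian noise, the posterior over m is bimodal, with peaks at both signs of the square root, and no linearized (single-Gaussian) method can represent that. A grid evaluation stands in here for the mixture density networks used in the talk; all parameters are toy assumptions.

```python
import math

def posterior(ms, d_obs, sigma=0.1):
    # uniform prior on the grid; Gaussian likelihood from d = m^2 + N(0, sigma^2)
    w = [math.exp(-0.5 * ((m * m - d_obs) / sigma) ** 2) for m in ms]
    z = sum(w)
    return [wi / z for wi in w]

ms = [i / 100.0 for i in range(-200, 201)]   # grid over m in [-2, 2]
p = posterior(ms, d_obs=1.0)

# the posterior is bimodal: m = -1 and m = +1 fit the datum d = 1 exactly,
# and on this grid both grid points attain exactly the maximal probability
modes = [m for m, pi in zip(ms, p) if pi == max(p)]
```

A linearized inversion started near one mode would report a single Gaussian there and assign essentially zero probability to the other, equally valid, solution branch.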

  7. Fuel optimal maneuvers for spacecraft with fixed thrusters

    NASA Technical Reports Server (NTRS)

    Carter, T. C.

    1982-01-01

    Several mathematical models, including a minimum integral square criterion problem, were used for the qualitative investigation of fuel optimal maneuvers for spacecraft with fixed thrusters. The solutions consist of intervals of "full thrust" and "coast" indicating that thrusters do not need to be designed as "throttleable" for fuel optimal performance. For the primary model considered, singular solutions occur only if the optimal solution is "pure translation". "Time optimal" singular solutions can be found which consist of intervals of "coast" and "full thrust". The shape of the optimal fuel consumption curve as a function of flight time was found to depend on whether or not the initial state is in the region admitting singular solutions. Comparisons of fuel optimal maneuvers in deep space with those relative to a point in circular orbit indicate that qualitative differences in the solutions can occur. Computation of fuel consumption for certain "pure translation" cases indicates that considerable savings in fuel can result from the fuel optimal maneuvers.

  8. Improving Energy Efficiency in CNC Machining

    NASA Astrophysics Data System (ADS)

    Pavanaskar, Sushrut S.

    We present our work on analyzing and improving the energy efficiency of multi-axis CNC milling process. Due to the differences in energy consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy requirement for machining a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model - and the associated software tool - facilitate direct comparison of various alternative toolpath strategies based on their energy-consumption performance. Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy that may be used to generate new toolpaths that are inherently energy-efficient, inspired by research on digital micrography -- a form of computational art. For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution. With this work on the two types of CNC machines, we demonstrate that without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.

  9. The Czechoslovak legal regulation of family relations affected by development in medicine.

    PubMed

    Dragonec, J

    1990-01-01

    Medicine has developed rapidly during the last decades. Transplantation, sex-change surgery in transsexual or heterosexual persons, interference in the process of reproduction of human species and procedures like lobotomy have remarkably expanded the possibilities of contemporary medicine. This, at the same time, gives rise to unprecedented legal problems. A number of them have not yet been solved in many countries, though legislative solutions are sought. The road to their solution, however, is full of blind curves: no sooner does the law offer an answer to one problem than medicine demands the answer to another, brand new one. This is why knowledge of these problems' regulation in different countries might be of use. This article gives an outline of their regulation in Czechoslovakia.

  10. Manpower Policy and Problems in Greece. Reviews of Manpower and Social Policies No. 3.

    ERIC Educational Resources Information Center

    Organisation for Economic Cooperation and Development, Paris (France).

    A full solution of the employment problems of countries in the stage of development now existing in Greece depends, to a great extent, upon the possibilities of achieving the accumulation of capital necessary for the establishment of new industries and other investment. It is important for Greece to promote economic progress in the different…

  11. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  12. Measurement-device-independent quantum key distribution.

    PubMed

    Lo, Hoi-Kwong; Curty, Marcos; Qi, Bing

    2012-03-30

    How to remove detector side channel attacks has been a notoriously hard problem in quantum cryptography. Here, we propose a simple solution to this problem--measurement-device-independent quantum key distribution (QKD). It not only removes all detector side channels, but also doubles the secure distance with conventional lasers. Our proposal can be implemented with standard optical components with low detection efficiency and highly lossy channels. In contrast to the previous solution of full device independent QKD, the realization of our idea does not require detectors of near unity detection efficiency in combination with a qubit amplifier (based on teleportation) or a quantum nondemolition measurement of the number of photons in a pulse. Furthermore, its key generation rate is many orders of magnitude higher than that based on full device independent QKD. The results show that long-distance quantum cryptography over, say, 200 km will remain secure even with seriously flawed detectors.

  13. Band connectivity for topological quantum chemistry: Band structures as a graph theory problem

    NASA Astrophysics Data System (ADS)

    Bradlyn, Barry; Elcoro, L.; Vergniory, M. G.; Cano, Jennifer; Wang, Zhijun; Felser, C.; Aroyo, M. I.; Bernevig, B. Andrei

    2018-01-01

    The conventional theory of solids is well suited to describing band structures locally near isolated points in momentum space, but struggles to capture the full, global picture necessary for understanding topological phenomena. In part of a recent paper [B. Bradlyn et al., Nature (London) 547, 298 (2017), 10.1038/nature23268], we introduced a way to overcome this difficulty by formulating the problem of sewing together many disconnected local k·p band structures across the Brillouin zone in terms of graph theory. In this paper, we give the details of our full theoretical construction. We show that crystal symmetries strongly constrain the allowed connectivities of energy bands, and we employ graph-theoretic techniques such as graph connectivity to enumerate all the solutions to these constraints. The tools of graph theory allow us to identify disconnected groups of bands in these solutions, and so identify topologically distinct insulating phases.
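The graph-theoretic step described above reduces, computationally, to a connected-components search: once the local band structures are nodes and the symmetry-allowed connections are edges, disconnected groups of bands are exactly the connected components. The adjacency data below is an invented toy example, not a real space group's compatibility relations.

```python
from collections import deque

def connected_components(adj):
    """Return the connected components of an undirected graph given as an
    adjacency dict {vertex: [neighbors]}, via breadth-first search."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v])
        seen |= comp
        comps.append(comp)
    return comps

# toy compatibility graph: bands 0-2 connect among themselves,
# while bands 3-4 form a second, disconnected group
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
groups = connected_components(adj)
```

In the paper's setting, finding such a disconnected group of bands that can be filled in isolation is what signals a candidate topologically distinct insulating phase.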

  14. An approximate stationary solution for multi-allele neutral diffusion with low mutation rates.

    PubMed

    Burden, Conrad J; Tang, Yurong

    2016-12-01

    We address the problem of determining the stationary distribution of the multi-allelic, neutral-evolution Wright-Fisher model in the diffusion limit. A full solution to this problem for an arbitrary K×K mutation rate matrix involves solving for the stationary solution of a forward Kolmogorov equation over a (K-1)-dimensional simplex, and remains intractable. In most practical situations mutation rates are slow on the scale of the diffusion limit and the solution is heavily concentrated on the corners and edges of the simplex. In this paper we present a practical approximate solution for slow mutation rates in the form of a set of line densities along the edges of the simplex. The method of solution relies on parameterising the general non-reversible rate matrix as the sum of a reversible part and a set of (K-1)(K-2)/2 independent terms corresponding to fluxes of probability along closed paths around faces of the simplex. The solution is potentially a first step in estimating non-reversible evolutionary rate matrices from observed allele frequency spectra.

  15. Full-Duplex Bidirectional Secure Communications Under Perfect and Distributionally Ambiguous Eavesdropper's CSI

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Zhang, Ying; Lin, Jingran; Wu, Sissi Xiaoxiao

    2017-09-01

    Consider a full-duplex (FD) bidirectional secure communication system, where two communication nodes, named Alice and Bob, simultaneously transmit and receive confidential information from each other, and an eavesdropper, named Eve, overhears the transmissions. Our goal is to maximize the sum secrecy rate (SSR) of the bidirectional transmissions by optimizing the transmit covariance matrices at Alice and Bob. To tackle this SSR maximization (SSRM) problem, we develop an alternating difference-of-concave (ADC) programming approach to alternately optimize the transmit covariance matrices at Alice and Bob. We show that the ADC iteration has a semi-closed-form beamforming solution, and is guaranteed to converge to a stationary solution of the SSRM problem. Besides the SSRM design, this paper also deals with a robust SSRM transmit design under a moment-based random channel state information (CSI) model, where only some roughly estimated first- and second-order statistics of Eve's CSI are available, but the exact distribution and other higher-order statistics are not known. This moment-based error model is new and different from the widely used bounded-sphere error model and the Gaussian random error model. Under the considered CSI error model, the robust SSRM is formulated as an outage probability-constrained SSRM problem. By leveraging Lagrangian duality theory and DC programming, a tractable safe solution to the robust SSRM problem is derived. The effectiveness and the robustness of the proposed designs are demonstrated through simulations.

  16. An efficient flexible-order model for 3D nonlinear water waves

    NASA Astrophysics Data System (ADS)

    Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.

    2009-04-01

    The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. A linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented, together with demonstrations of grid-independent iteration counts and optimal scaling of the solution effort. Calculations for 3D nonlinear wave problems, including steep nonlinear waves and a shoaling problem, show good agreement with experimental measurements and other calculations from the literature.
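
    The Gauss-Seidel smoothing at the heart of the multigrid preconditioner can be sketched in one dimension (a minimal analogue, not the paper's 3D solver): sweep through the grid, updating each unknown of the discrete Laplace problem from its current neighbours.

```python
import numpy as np

# Minimal 1D analogue: Gauss-Seidel sweeps applied to the discrete Laplace
# problem  -u'' = f  with u(0) = u(1) = 0, second-order central differences.

def gauss_seidel_poisson(f, n_sweeps=5000):
    """Solve -u'' = f on a uniform interior grid by Gauss-Seidel iteration."""
    n = len(f)
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0   # homogeneous Dirichlet BCs
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])
    return u

n = 49
x = np.linspace(0, 1, n + 2)[1:-1]            # interior grid points
f = np.pi ** 2 * np.sin(np.pi * x)            # manufactured source
u = gauss_seidel_poisson(f)                   # exact solution: sin(pi x)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```

In the full method, a few such sweeps per grid level smooth the error, and the multigrid cycle built from them serves only as a preconditioner for GMRES rather than as the solver itself.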

  17. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real-time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced-order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model-order reduction to a single-state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date has proved analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where the heating rate is constrained. A first-order state-variable inequality constraint was imposed on the full-order AOTV point-mass equations of motion, using a simple aerodynamic heating-rate model.

  18. Solution of the Riemann problem for polarization waves in a two-component Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Ivanov, S. K.; Kamchatnov, A. M.; Congy, T.; Pavloff, N.

    2017-12-01

    We provide a classification of the possible flows of two-component Bose-Einstein condensates evolving from initially discontinuous profiles. We consider the situation where the dynamics can be reduced to the consideration of a single polarization mode (also denoted as "magnetic excitation") obeying a system of equations equivalent to the Landau-Lifshitz equation for an easy-plane ferromagnet. We present the full set of one-phase periodic solutions. The corresponding Whitham modulation equations are obtained together with formulas connecting their solutions with the Riemann invariants of the modulation equations. The problem is not genuinely nonlinear, and this results in a non-single-valued mapping of the solutions of the Whitham equations with physical wave patterns as well as the appearance of interesting elements—contact dispersive shock waves—that are absent in more standard, genuinely nonlinear situations. Our analytic results are confirmed by numerical simulations.

  19. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  20. Existence and numerical simulation of periodic traveling wave solutions to the Casimir equation for the Ito system

    NASA Astrophysics Data System (ADS)

    Abbasbandy, S.; Van Gorder, R. A.; Hajiketabi, M.; Mesrizadeh, M.

    2015-10-01

    We consider traveling wave solutions to the Casimir equation for the Ito system (a two-field extension of the KdV equation). These traveling waves are governed by a nonlinear initial value problem with an interesting nonlinearity (which actually amplifies in magnitude as the size of the solution becomes small). The nonlinear problem is parameterized by two initial constant values, and we demonstrate that the existence of solutions is strongly tied to these parameter values. For our interests, we are concerned with positive, bounded, periodic wave solutions. We are able to classify parameter regimes which admit such solutions in full generality, thereby obtaining a nice existence result. Using the existence result, we are then able to numerically simulate the positive, bounded, periodic solutions. We elect to employ a group preserving scheme in order to numerically study these solutions, and an outline of this approach is provided. The numerical simulations serve to illustrate the properties of these solutions predicted analytically through the existence result. Physically, these results demonstrate the existence of a type of space-periodic structure in the Casimir equation for the Ito model, which propagates as a traveling wave.

  1. High-speed reacting flow simulation using USA-series codes

    NASA Astrophysics Data System (ADS)

    Chakravarthy, S. R.; Palaniswamy, S.

    In this paper, the finite-rate chemistry (FRC) formulation for the USA series of codes and three sets of validations are presented. The USA-series computational fluid dynamics (CFD) codes are based on Unified Solution Algorithms, including explicit and implicit formulations, factorization and relaxation approaches, time-marching and space-marching methodologies, etc., in order to solve a very wide class of CFD problems within a single framework. The Euler or Navier-Stokes equations are solved using a finite-volume treatment with upwind Total Variation Diminishing (TVD) discretization for the inviscid terms. Perfect and real gas options are available, including equilibrium and nonequilibrium chemistry. This capability has been widely used to study various problems including Space Shuttle exhaust plumes, National Aerospace Plane (NASP) designs, etc. (1) Numerical solutions are presented showing the full range of possible solutions to steady detonation wave problems. (2) A comparison between the solution obtained by the USA code and the Generalized Kinetics Analysis Program (GKAP) is shown for supersonic combustion in a duct. (3) Simulation of combustion in a supersonic shear layer is shown to be in reasonable agreement with experimental observations.
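
    The upwind finite-volume idea behind such discretizations can be sketched in its simplest first-order form for linear advection (illustration only; the USA codes use higher-order TVD variants of this idea):

```python
import numpy as np

# First-order upwind finite-volume update for linear advection u_t + a u_x = 0
# on a periodic grid: the interface flux is taken from the upwind (left) cell.

def upwind_step(u, a, dt, dx):
    """One conservative upwind update on a periodic grid (assumes a > 0)."""
    flux = a * u                  # physical flux evaluated in each cell
    f_left = np.roll(flux, 1)     # upwind interface flux from the left neighbor
    return u - (dt / dx) * (flux - f_left)

nx, a = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                 # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse
for _ in range(200):              # advect for t = 1, one full period
    u = upwind_step(u, a, dt, dx)
print(u.min(), u.max())           # monotone scheme: no new extrema appear
```

The scheme is conservative (cell averages change only through interface fluxes) and monotone, which is the property TVD discretizations generalize to higher order without introducing spurious oscillations at shocks.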

  2. Monte Carlo exploration of Mikheyev-Smirnov-Wolfenstein solutions to the solar neutrino problem

    NASA Technical Reports Server (NTRS)

    Shi, X.; Schramm, D. N.; Bahcall, J. N.

    1992-01-01

    The paper explores the impact of astrophysical uncertainties on the Mikheyev-Smirnov-Wolfenstein (MSW) solution by calculating the allowed MSW solutions for 1000 different solar models with a Monte Carlo selection of solar model input parameters, assuming a full three-family MSW mixing. Applications are made to the chlorine, gallium, Kamiokande, and Borexino experiments. The initial GALLEX result limits the mixing parameters to the upper diagonal and the vertical regions of the MSW triangle. The expected event rates in the Borexino experiment are also calculated, assuming the MSW solutions implied by GALLEX.
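
    The Monte Carlo strategy in this abstract is generic: draw many realizations of the uncertain model inputs, run the model for each draw, and examine the spread of the predicted observable. A minimal sketch with a made-up stand-in model (not a solar model, and not the authors' code):

```python
import random

# Monte Carlo propagation of input-parameter uncertainty through a toy model.
# The model and parameter names below are hypothetical illustrations.
random.seed(0)

def toy_model(s_factor, opacity):
    # hypothetical observable that grows with both uncertain inputs
    return s_factor ** 2.5 * opacity ** 0.8

predictions = []
for _ in range(1000):                       # 1000 model realizations
    s_factor = random.gauss(1.0, 0.05)      # fractional 1-sigma uncertainties
    opacity = random.gauss(1.0, 0.03)
    predictions.append(toy_model(s_factor, opacity))

predictions.sort()
lo, hi = predictions[25], predictions[-26]  # central ~95% interval
print(f"95% of realizations fall in [{lo:.3f}, {hi:.3f}]")
```

In the paper, each "realization" is an entire solar model, and the allowed MSW mixing-parameter region is recomputed for every draw, so the final result reflects the astrophysical uncertainties rather than a single best-fit model.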

  3. Howard University Bookstore

    ERIC Educational Resources Information Center

    Maxon, Hazel Carter; Negron, Jaime

    1977-01-01

    Two full-time university bookstores, with three satellites helping during rush period, serve the Howard students and faculty. Solutions to problems of space, acquiring used books, and communications with faculty members are discussed, and the successful retailing of black studies books is described. (LBH)

  4. Sub-problem Optimization With Regression and Neural Network Approximators

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Hopkins, Dale A.; Patnaik, Surya N.

    2003-01-01

    Design optimization of large systems can be attempted through a sub-problem strategy. In this strategy, the original problem is divided into a number of smaller problems that are clustered together to obtain a sequence of sub-problems. Solution to the large problem is attempted iteratively through repeated solutions to the modest sub-problems. This strategy is applicable to structures and to multidisciplinary systems. For structures, clustering the substructures generates the sequence of sub-problems. For a multidisciplinary system, individual disciplines, accounting for coupling, can be considered as sub-problems. A sub-problem, if required, can be further broken down to accommodate sub-disciplines. The sub-problem strategy is being implemented into the NASA design optimization test bed, referred to as "CometBoards." Neural network and regression approximators are employed for reanalysis and sensitivity analysis calculations at the sub-problem level. The strategy has been implemented in sequential as well as parallel computational environments. This strategy, which attempts to alleviate algorithmic and reanalysis deficiencies, has the potential to become a powerful design tool. However, several issues have to be addressed before its full potential can be harnessed. This paper illustrates the strategy and addresses some issues.
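
    The sub-problem strategy can be sketched with a toy coupled objective (my own minimal example, not CometBoards): split the design variables into clusters and repeatedly solve each small sub-problem exactly while the other cluster is held fixed.

```python
# Alternating sub-problem solution of a coupled quadratic design objective:
#   f(x, y) = (x - 1)^2 + (y - 2)^2 + 0.1*(x - y)^2
# The 0.1*(x - y)^2 term plays the role of coupling between substructures.

def solve_by_subproblems(n_iter=100):
    """Minimize f by alternating exact solutions of the two sub-problems."""
    x = y = 0.0
    for _ in range(n_iter):
        # sub-problem 1: set df/dx = 2(x-1) + 0.2(x-y) = 0 with y fixed
        x = (2.0 + 0.2 * y) / 2.2
        # sub-problem 2: set df/dy = 2(y-2) - 0.2(x-y) = 0 with x fixed
        y = (4.0 + 0.2 * x) / 2.2
    return x, y

x, y = solve_by_subproblems()
print(x, y)   # converges to the true joint optimum of the coupled problem
```

Weak coupling makes the alternation contract quickly; in the strategy described above, the expensive reanalysis inside each sub-problem is further replaced by neural network or regression approximators.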

  5. Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints

    NASA Technical Reports Server (NTRS)

    Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren

    2015-01-01

    Interplanetary missions are often subject to difficult constraints, like solar phase angle upon arrival at the destination, velocity at arrival, and altitudes for flybys. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions which do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work two stochastic global search methods are developed which are well suited to the constrained global interplanetary trajectory optimization problem.

  6. Centralisation of Assessment: Meeting the Challenges of Multi-Year Team Projects in Information Systems Education

    ERIC Educational Resources Information Center

    Cooper, Grahame; Heinze, Aleksej

    2007-01-01

    This paper focuses on the difficulties of assessing multi-year team projects, in which a team of students drawn from all three years of a full-time degree course works on a problem with and for a real-life organization. Although potential solutions to the problem of assessing team projects may be context-dependent, we believe that discussing these…

  7. Dynamic Supersonic Base Store Ejection Simulation Using Beggar

    DTIC Science & Technology

    2008-12-01

    selected convergence tolerance. Beggar accomplishes this by using the symmetric Gauss-Seidel relaxation scheme [26] (Section 2.3.3). To compute a time-accurate solution to an unsteady flow problem, Beggar applies Newton's method to Eq. 2.15.

  8. Efficient Credit Assignment through Evaluation Function Decomposition

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Tumer, Kagan; Miikkulainen, Risto

    2005-01-01

    Evolutionary methods are powerful tools in discovering solutions for difficult continuous tasks. When such a solution is encoded over multiple genes, a genetic algorithm faces the difficult credit assignment problem of evaluating how a single gene in a chromosome contributes to the full solution. Typically a single evaluation function is used for the entire chromosome, implicitly giving each gene in the chromosome the same evaluation. This method is inefficient because a gene will get credit for the contribution of all the other genes as well. Accurately measuring the fitness of individual genes in such a large search space requires many trials. This paper instead proposes turning this single complex search problem into a multi-agent search problem, where each agent has the simpler task of discovering a suitable gene. Gene-specific evaluation functions can then be created that have better theoretical properties than a single evaluation function over all genes. This method is tested on the difficult double-pole balancing problem, showing that agents using gene-specific evaluation functions can create a successful control policy in 20 percent fewer trials than the best existing genetic algorithms. The method is extended to more distributed problems, achieving 95 percent performance gains over traditional methods in the multi-rover domain.
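
    The contrast between global and gene-specific credit assignment can be sketched in a toy setting (my own simplification of the idea, not the paper's experiments): each "agent" hill-climbs one gene using an evaluation that depends only on that gene's contribution.

```python
import random

# Toy gene-specific credit assignment: the task is to match a hidden target
# vector. Each agent mutates its own gene and is scored only on that gene's
# error, so it never gets credit or blame for the other genes.
random.seed(1)

target = [0.3, -1.2, 0.8, 2.0]           # hidden optimum, one entry per gene

def gene_fitness(i, value):
    """Gene-specific evaluation: credit gene i only for its own error."""
    return -abs(value - target[i])

genes = [0.0] * len(target)
for step in range(2000):
    i = random.randrange(len(genes))      # one agent tries a mutation per step
    candidate = genes[i] + random.gauss(0.0, 0.1)
    if gene_fitness(i, candidate) > gene_fitness(i, genes[i]):
        genes[i] = candidate              # accept only if *this* gene improves

error = sum(abs(g - t) for g, t in zip(genes, target))
print(genes, error)
```

With a single global fitness, an improvement in one gene can be masked by random changes in the others, which is exactly the credit assignment inefficiency the abstract describes.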

  9. Galois groups of Schubert problems via homotopy computation

    NASA Astrophysics Data System (ADS)

    Leykin, Anton; Sottile, Frank

    2009-09-01

    Numerical homotopy continuation of solutions to polynomial equations is the foundation for numerical algebraic geometry, whose development has been driven by applications of mathematics. We use numerical homotopy continuation to investigate the problem in pure mathematics of determining Galois groups in the Schubert calculus. For example, we show by direct computation that the Galois group of the Schubert problem of 3-planes in C^8 meeting 15 fixed 5-planes non-trivially is the full symmetric group S_{6006}.

  10. Robust quantum optimizer with full connectivity.

    PubMed

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
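
    The number partitioning problem used as the benchmark above is easy to state: split a set of integers into two subsets whose sums are as close as possible. For the tiny instances a small optimizer can encode, a classical brute-force check is straightforward (illustration only, unrelated to the proposed architecture):

```python
from itertools import combinations

# Exhaustive search over one side of the partition; by symmetry it suffices to
# try subsets of size up to len(nums)//2.

def best_partition(nums):
    """Return (smallest achievable sum difference, one optimal subset)."""
    total = sum(nums)
    best = (total, ())                    # (difference, subset) to beat
    for r in range(len(nums) // 2 + 1):
        for subset in combinations(nums, r):
            diff = abs(total - 2 * sum(subset))
            if diff < best[0]:
                best = (diff, subset)
    return best

diff, subset = best_partition([4, 5, 6, 7, 8])
print(diff, subset)   # a perfect partition exists here: difference 0
```

The search space doubles with every added number, which is why the problem is NP-hard in general and a natural stress test for annealing-style optimizers.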

  11. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schrödinger problem and the KPI equation

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Polivanov, M. C.

    1992-11-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. We demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schrödinger equation as an example, we show that all types of solutions of the linear problems, as well as spectral data known in the literature, are given as specific values of this unique function — the resolvent function. A new form of the inverse problem is formulated.

  12. Toward the automated analysis of plasma physics problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mynick, H.E.

    1989-04-01

    A program (CALC) is described, which carries out nontrivial plasma physics calculations in a manner intended to emulate the approach of a human theorist. This includes the initial process of gathering the relevant equations from a plasma knowledge base, and then determining how to solve them. Solution of the sets of equations governing physics problems, which in general have a nonuniform, irregular structure not amenable to solution by standardized algorithmic procedures, is facilitated by an analysis of the structure of the equations and the relations among them. This often permits decompositions of the full problem into subproblems, and other simplifications in form, which render the resultant subsystems soluble by more standardized tools. CALC's operation is illustrated by a detailed description of its treatment of a sample plasma calculation. 5 refs., 3 figs.

  13. Cyber Space--with Elbow Room.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    1998-01-01

    Describes how the Hammond School District (Indiana) solved the problem of fitting the correct amount of space needed for students, teachers, and technology. Examines the district's solutions for furniture needs through the use of full-scale mockups of classroom arrangements; and the wiring, power needs, and lighting. (GR)

  14. On the stability of the solutions of the general problem of three bodies

    NASA Technical Reports Server (NTRS)

    Standish, E. M., Jr.

    1976-01-01

    The extent through which the initial conditions of a given three-body system may be varied without completely changing the qualitative nature of the subsequent system evolution is investigated. It is assumed that the three masses are equal, all initial velocities are zero, the first two bodies initially lie on the x-axis, and the position of the third body is confined to a specific region of space. Analysis of the system evolution for different initial positions of the third body shows that there is a whole area or 'island' in the x-y plane throughout which the initial position of the third body may be moved in a continuous fashion to produce an evolution which also changes in a continuous manner. A Monte Carlo approach is adopted to determine the full extent of this island in the general problem. It is concluded that the stability of a full solution may be directly related to the size of its island in phase space.

  15. Refraction of dispersive shock waves

    NASA Astrophysics Data System (ADS)

    El, G. A.; Khodorovskii, V. V.; Leszczyszyn, A. M.

    2012-09-01

    We study a dispersive counterpart of the classical gas dynamics problem of the interaction of a shock wave with a counter-propagating simple rarefaction wave, often referred to as the shock wave refraction. The refraction of a one-dimensional dispersive shock wave (DSW) due to its head-on collision with the centred rarefaction wave (RW) is considered in the framework of the defocusing nonlinear Schrödinger (NLS) equation. For the integrable cubic nonlinearity case we present a full asymptotic description of the DSW refraction by constructing appropriate exact solutions of the Whitham modulation equations in Riemann invariants. For the NLS equation with saturable nonlinearity, whose modulation system does not possess Riemann invariants, we take advantage of the recently developed method for the DSW description in non-integrable dispersive systems to obtain main physical parameters of the DSW refraction. The key features of the DSW-RW interaction predicted by our modulation theory analysis are confirmed by direct numerical solutions of the full dispersive problem.

  16. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Gottwald, James A.; Bliss, Donald B.

    1990-01-01

    The focus is on a noise control method which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. This interior noise reduction method, called alternate resonance tuning (ART), is described both theoretically and experimentally. Problems dealing with tuning single paneled wall structures for optimum noise reduction using the ART methodology are presented, and three theoretical problems are analyzed. The first analysis is a three-dimensional, full acoustic solution for tuning a panel wall composed of repeating sections with four different panel tunings within that section, where the panels are modeled as idealized spring-mass-damper systems. The second analysis is a two-dimensional, full acoustic solution for a panel geometry influenced by the effect of a propagating external pressure field such as that which might be associated with propeller passage by a fuselage. To reduce the analysis complexity, idealized spring-mass-damper panels are again employed. The final theoretical analysis presents the general four-panel problem with real panel sections, where the effect of higher structural modes is discussed. Results from an experimental program highlight real applications of the ART concept and show the effectiveness of the tuning on real structures.

  17. Full three-body problem in effective-field-theory models of gravity

    NASA Astrophysics Data System (ADS)

    Battista, Emmanuele; Esposito, Giampiero

    2014-10-01

    Recent work in the literature has studied the restricted three-body problem within the framework of effective-field-theory models of gravity. This paper extends such a program by considering the full three-body problem, when the Newtonian potential is replaced by a more general central potential which depends on the mutual separations of the three bodies. The general form of the equations of motion is written down, and they are studied when the interaction potential reduces to the quantum-corrected central potential considered recently in the literature. A recursive algorithm is found for solving the associated variational equations, which describe small departures from given periodic solutions of the equations of motion. Our scheme involves repeated application of a 2×2 matrix of first-order linear differential operators.

  18. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required, to use the stiffener method as the analysis tool in optimization, is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  19. Stratified Shear Flows In Pipe Geometries

    NASA Astrophysics Data System (ADS)

    Harabin, George; Camassa, Roberto; McLaughlin, Richard; UNC Joint Fluids Lab Team

    2015-11-01

    Exact and series solutions to the full Navier-Stokes equations coupled to the advection diffusion equation are investigated in tilted three-dimensional pipe geometries. Analytic techniques for studying the three-dimensional problem provide a means for tackling interesting questions such as the optimal domain for mass transport, and provide new avenues for experimental investigation of diffusion driven flows. Both static and time dependent solutions will be discussed. NSF RTG DMS-0943851, NSF RTG ARC-1025523, NSF DMS-1009750.

  20. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance, robustness and reliability based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  1. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of the resulting optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
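
A minimal sketch of the linear source-term model described above (observations as SRS matrix times source vector), solved with a manually tuned Tikhonov regularization of the kind the paper replaces with Bayesian estimation. The matrix, source vector, noise level, and regularization weight are all illustrative:

```python
import numpy as np

# Linear model y = M x + noise, with M the source-receptor sensitivity (SRS)
# matrix. Solved here by Tikhonov regularization with a hand-picked weight
# alpha -- the kind of "tuning parameter" that LS-APC estimates instead.
rng = np.random.default_rng(0)
n_obs, n_src = 40, 10
M = rng.uniform(0.0, 1.0, size=(n_obs, n_src))          # illustrative SRS matrix
x_true = np.zeros(n_src)
x_true[3] = 5.0                                         # a single release episode
y = M @ x_true + 0.05 * rng.standard_normal(n_obs)      # noisy observations

alpha = 1e-2                                            # manual regularization weight
x_est = np.linalg.solve(M.T @ M + alpha * np.eye(n_src), M.T @ y)
print(np.round(x_est, 2))                               # peak near index 3
```

Changing `alpha` changes the answer, which is precisely the sensitivity to manual uncertainty settings that motivates estimating such parameters from the data.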

  2. 34 CFR 200.40 - Technical assistance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic... full compliance with all of the reporting provisions of Title II of the Higher Education Act of 1965... system, and other examples of student work, to identify and develop solutions to problems in— (i...

  3. 34 CFR 200.40 - Technical assistance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic... full compliance with all of the reporting provisions of Title II of the Higher Education Act of 1965... system, and other examples of student work, to identify and develop solutions to problems in— (i...

  4. 34 CFR 200.40 - Technical assistance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic... full compliance with all of the reporting provisions of Title II of the Higher Education Act of 1965... system, and other examples of student work, to identify and develop solutions to problems in— (i...

  5. 34 CFR 200.40 - Technical assistance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic... full compliance with all of the reporting provisions of Title II of the Higher Education Act of 1965... system, and other examples of student work, to identify and develop solutions to problems in— (i...

  6. 34 CFR 200.40 - Technical assistance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic... full compliance with all of the reporting provisions of Title II of the Higher Education Act of 1965... system, and other examples of student work, to identify and develop solutions to problems in— (i...

  7. Job-Sharing at the Greater Victoria Public Library.

    ERIC Educational Resources Information Center

    Miller, Don

    1978-01-01

    Describes the problems associated with the management of part-time library employees and some solutions afforded by a job sharing arrangement in use at the Greater Victoria Public Library. This is a voluntary work arrangement, changing formerly full-time positions into multiple part-time positions. (JVP)

  8. Exact solutions of massive gravity in three dimensions

    NASA Astrophysics Data System (ADS)

    Chakhad, Mohamed

    In recent years, there has been an upsurge in interest in three-dimensional theories of gravity. In particular, two theories of massive gravity in three dimensions hold strong promise in the search for fully consistent theories of quantum gravity, an understanding of which will shed light on the problems of quantum gravity in four dimensions. One of these theories is the "old" third-order theory of topologically massive gravity (TMG) and the other one is a "new" fourth-order theory of massive gravity (NMG). Despite this increase in research activity, the problem of finding and classifying solutions of TMG and NMG remains a wide open area of research. In this thesis, we provide explicit new solutions of massive gravity in three dimensions and suggest future directions of research. These solutions belong to the Kundt class of spacetimes. A systematic analysis of the Kundt solutions with constant scalar polynomial curvature invariants provides a glimpse of the structure of the spaces of solutions of the two theories of massive gravity. We also find explicit solutions of topologically massive gravity whose scalar polynomial curvature invariants are not all constant, and these are the first such solutions. A number of properties of Kundt solutions of TMG and NMG, such as an identification of solutions which lie at the intersection of the full nonlinear and linearized theories, are also derived.

  9. A dimensionally split Cartesian cut cell method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Gokhale, Nandan; Nikiforakis, Nikos; Klein, Rupert

    2018-07-01

    We present a dimensionally split method for solving hyperbolic conservation laws on Cartesian cut cell meshes. The approach combines local geometric and wave speed information to determine a novel stabilised cut cell flux, and we provide a full description of its three-dimensional implementation in the dimensionally split framework of Klein et al. [1]. The convergence and stability of the method are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. Compared to the cut cell flux of Klein et al., the new flux alleviates the problem of oscillatory boundary solutions produced by the former at higher Courant numbers, and also enables the computation of more accurate solutions near stagnation points. Being dimensionally split, the method is simple to implement and extends readily to multiple dimensions.
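
The one-dimensional model problem mentioned above (linear advection) can be sketched with a standard first-order upwind scheme; the grid size, CFL number, and initial profile are illustrative choices, and this is not the paper's cut cell flux:

```python
import numpy as np

# First-order upwind scheme for u_t + a u_x = 0 on a periodic grid, the model
# equation for which convergence and stability are analysed in the paper.
a, nx, cfl = 1.0, 200, 0.9
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)             # initial Gaussian pulse

n_steps = int(round(0.4 / dt))
for _ in range(n_steps):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # periodic upwind update

# After t ~ 0.4 the pulse has advected to about x = 0.7, slightly smeared by
# numerical diffusion but without oscillation or blow-up (CFL < 1).
print(x[np.argmax(u)], u.max())
```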

  10. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-05-01

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on the measurement accuracy of conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction analysis technique is also automated and practical, and it could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
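
The multiple headspace extraction principle that the method builds on can be sketched as a geometric extrapolation of successive peak areas; the areas below are synthetic, and the constant extraction ratio is an idealization:

```python
# Successive headspace extractions remove a constant fraction of the analyte,
# so peak areas decay geometrically: A_i = A_1 * q**(i - 1). The total signal
# is then the geometric-series sum A_1 / (1 - q). The areas are synthetic.
areas = [1000.0, 600.0, 360.0]      # illustrative peak areas from 3 extractions
q = areas[1] / areas[0]             # extraction decay ratio (here 0.6)
total = areas[0] / (1.0 - q)        # extrapolated total analyte signal
print(total)                        # 2500.0
```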

  11. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is a key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method; instead, they often try to assess a result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power, and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001) in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information of the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategies. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with a sampling of the local solution space made possible by its embarrassingly parallel property.
    References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
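
The low-rank posterior covariance idea above can be illustrated with the basic ensemble estimate C ≈ X′X′ᵀ/(N−1) built from ensemble anomalies; the ensemble here is synthetic, and this is not the full ETKF update:

```python
import numpy as np

# Low-rank sample covariance from ensemble anomalies: C ~ X' X'^T / (N - 1).
# With N members the rank is at most N - 1, which is what makes large-scale
# uncertainty estimates affordable. The ensemble itself is synthetic.
rng = np.random.default_rng(2)
n_params, n_members = 50, 8
ensemble = rng.standard_normal((n_params, n_members))        # N model samples
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)  # deviations from mean
cov_lowrank = anomalies @ anomalies.T / (n_members - 1)
print(np.linalg.matrix_rank(cov_lowrank))                    # 7, i.e. N - 1
```

Because each ensemble member can be propagated independently, building this estimate is embarrassingly parallel, as the abstract notes.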

  12. A stochastic algorithm for global optimization and for best populations: A test case of side chains in proteins

    PubMed Central

    Glick, Meir; Rayan, Anwar; Goldblum, Amiram

    2002-01-01

    The problem of global optimization is pivotal in a variety of scientific fields. Here, we present a robust stochastic search method that is able to find the global minimum for a given cost function, as well as, in most cases, any number of best solutions for very large combinatorial “explosive” systems. The algorithm iteratively eliminates variable values that contribute consistently to the highest end of a cost function's spectrum of values for the full system. Values that have not been eliminated are retained for a full, exhaustive search, allowing the creation of an ordered population of best solutions, which includes the global minimum. We demonstrate the ability of the algorithm to explore the conformational space of side chains in eight proteins, with 54 to 263 residues, to reproduce a population of their low-energy conformations. The 1,000 lowest energy solutions are identical in the stochastic (with two different seed numbers) and full, exhaustive searches for six of eight proteins. The others retain the lowest 141 and 213 (of 1,000) conformations, depending on the seed number, and the maximal difference between stochastic and exhaustive is only about 0.15 kcal/mol. The energy gap between the lowest and highest of the 1,000 low-energy conformers in eight proteins is between 0.55 and 3.64 kcal/mol. This algorithm offers real opportunities for solving problems of high complexity in structural biology and in other fields of science and technology. PMID:11792838
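
The elimination loop described above can be sketched on a toy combinatorial problem; the quadratic cost, problem size, sample counts, and elimination schedule are all illustrative stand-ins, not the paper's side-chain energy function:

```python
import itertools
import random

# Values that appear most often in the worst-scoring random samples are
# discarded round by round; the surviving values are searched exhaustively.
random.seed(1)
n_vars, n_vals = 6, 5
target = [2, 0, 4, 1, 3, 2]                      # hidden global minimum

def cost(s):
    return sum((a - b) ** 2 for a, b in zip(s, target))

allowed = [set(range(n_vals)) for _ in range(n_vars)]
for _ in range(3):                               # elimination rounds
    samples = [[random.choice(sorted(a)) for a in allowed] for _ in range(400)]
    samples.sort(key=cost)
    worst = samples[-100:]                       # highest end of the cost spectrum
    for i in range(n_vars):
        if len(allowed[i]) > 2:                  # always keep at least two values
            counts = {v: sum(s[i] == v for s in worst) for v in allowed[i]}
            allowed[i].discard(max(counts, key=counts.get))

# Exhaustive search over the retained values (2 per variable -> 64 candidates).
best = min(itertools.product(*(sorted(a) for a in allowed)), key=cost)
print(best, cost(best))
```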

  13. Robust quantum optimizer with full connectivity

    PubMed Central

    Nigg, Simon E.; Lörch, Niels; Tiwari, Rakesh P.

    2017-01-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation. PMID:28435880

  14. Systematic investigation of non-Boussinesq effects in variable-density groundwater flow simulations.

    PubMed

    Guevara Morel, Carlos R; van Reeuwijk, Maarten; Graf, Thomas

    2015-12-01

    The validity of three mathematical models describing variable-density groundwater flow is systematically evaluated: (i) a model which invokes the Oberbeck-Boussinesq approximation (OB approximation), (ii) a model of intermediate complexity (NOB1) and (iii) a model which solves the full set of equations (NOB2). The NOB1 and NOB2 descriptions have been added to the HydroGeoSphere (HGS) model, which originally contained an implementation of the OB description. We define the Boussinesq parameter ε_ρ = β_ω Δω, where β_ω is the solutal expansivity and Δω is the characteristic difference in solute mass fraction. The Boussinesq parameter ε_ρ is used to systematically investigate three flow scenarios covering a range of free and mixed convection problems: 1) the low Rayleigh number Elder problem (Van Reeuwijk et al., 2009), 2) a convective fingering problem (Xie et al., 2011) and 3) a mixed convective problem (Schincariol et al., 1994). Results indicate that small density differences (ε_ρ ≤ 0.05) produce no apparent changes in the total solute mass in the system, plume penetration depth, center of mass and mass flux, independent of the mathematical model used. Deviations between OB, NOB1 and NOB2 occur for large density differences (ε_ρ > 0.12), where lower description levels will underestimate the vertical plume position and overestimate mass flux. Based on the cases considered here, we suggest the following guidelines for saline convection: the OB approximation is valid for cases with ε_ρ < 0.05, and the full NOB set of equations needs to be used for cases with ε_ρ > 0.10. Whether NOB effects are important in the intermediate region differs from case to case. Copyright © 2015 Elsevier B.V. All rights reserved.
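
The guideline above reduces to a simple check on ε_ρ = β_ω Δω; the thresholds are those stated in the abstract, while the input values are hypothetical:

```python
# Regime check based on the Boussinesq parameter eps_rho = beta_omega * delta_omega.
# The thresholds (0.05 and 0.10) are the guidelines quoted in the abstract; the
# example inputs are hypothetical, not from the paper's test cases.
def boussinesq_regime(beta_omega, delta_omega):
    eps = beta_omega * delta_omega
    if eps < 0.05:
        return eps, "OB approximation valid"
    if eps > 0.10:
        return eps, "full NOB2 equations needed"
    return eps, "intermediate: model choice is case-dependent"

print(boussinesq_regime(0.7, 0.035))    # small density difference -> OB valid
```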

  15. A parallel algorithm for nonlinear convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1990-01-01

    A parallel algorithm for the efficient solution of nonlinear time-dependent convection-diffusion equations with a small parameter on the diffusion term is presented. The method is based on a physically motivated domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. The method is suitable for the solution of problems arising in the simulation of fluid dynamics. Experimental results for a nonlinear equation in two dimensions are presented.

  16. The Design and Evaluation of a High Performance Smalltalk System

    DTIC Science & Technology

    1986-02-01

    the problems that Smalltalk-80 presents and the solutions in SOAR's architecture. The effectiveness of each solution is represented by the time cost of... chips ran small programs and reclaimed storage. Fibonacci (20) took 100 million cycles with a 64KW memory that was half-full. Over two...

  17. Monte Carlo simulation of a near-continuum shock-shock interaction problem

    NASA Technical Reports Server (NTRS)

    Carlson, Ann B.; Wilmoth, Richard G.

    1992-01-01

    A complex shock interaction is calculated with direct simulation Monte Carlo (DSMC). The calculation is performed for the near-continuum flow produced when an incident shock impinges on the bow shock of a 0.1 in. radius cowl lip for freestream conditions of approximately Mach 15 and 35 km altitude. Solutions are presented both for a full finite-rate chemistry calculation and for a case with chemical reactions suppressed. In each case, both the undisturbed flow about the cowl lip and the full shock interaction flowfields are calculated. Good agreement has been obtained between the no-chemistry simulation of the undisturbed flow and a perfect gas solution obtained with the viscous shock-layer method. Large differences in calculated surface properties when different chemical models are used demonstrate the necessity of adequately representing the chemistry when making surface property predictions. Preliminary grid refinement studies make it possible to estimate the accuracy of the solutions.

  18. Improving Provisions for Organization, Housing, Financial Support and Accountability.

    ERIC Educational Resources Information Center

    Polley, John W.; Lamitie, Robert E.

    This chapter provides insights into the solution of financial and governance problems that face big city education. The report identifies recent developments affecting big city education such as metropolitanism, regionalism, full State financing, revenue sharing, and reform of property taxation. The authors discuss (1) recent court cases affecting…

  19. Motivating Learners at South Korean Universities

    ERIC Educational Resources Information Center

    Niederhauser, Janet S.

    2012-01-01

    Students at many universities often fail to reach their full potential as English language learners due to low motivation. Some of the factors that affect their motivation relate to the country's education system in general. Others reflect institutional and cultural views of language learning in particular. Using a problem-solution format, this…

  20. Linguistic Features of Middle School Environmental Education Texts.

    ERIC Educational Resources Information Center

    Chenhansa, Suporn; Schleppegrell, Mary

    1998-01-01

    The language used in environmental education texts has linguistic features that affect students' comprehension of concepts and their ability to envision solutions to environmental problems. Findings indicate that features of texts such as abstract nouns and lack of explicit agents impede students' full comprehension of complex issues and obscure…

  1. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
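
The core Iterative Boltzmann Inversion step mentioned above updates the pair potential by kT·ln(g_current/g_target) until the RDFs match; the RDFs below are synthetic placeholders rather than simulation output:

```python
import numpy as np

# One IBI iteration: V_{n+1}(r) = V_n(r) + kT * ln(g_n(r) / g_target(r)).
# The radial distribution functions here are synthetic Gaussian bumps, not
# RDFs measured from a fine-grained simulation.
kT = 1.0
r = np.linspace(0.9, 3.0, 100)
g_target = 1.0 + 0.5 * np.exp(-(r - 1.2) ** 2 / 0.02)    # reference RDF
g_current = 1.0 + 0.3 * np.exp(-(r - 1.25) ** 2 / 0.02)  # coarse-grained RDF

def ibi_update(V, g_current, g_target, kT=1.0):
    return V + kT * np.log(g_current / g_target)

V = np.zeros_like(r)                  # initial guess for the pair potential
V_new = ibi_update(V, g_current, g_target, kT)
# Where the current RDF undershoots the target, the potential is lowered
# (made more attractive), pulling particles into that region on the next run.
print(V_new[np.argmax(g_target)])     # negative at the target peak
```

In an actual workflow this update would alternate with MD runs that regenerate `g_current`; matching the RDF in this way says nothing by itself about thermodynamic properties, which is the deficiency the abstract discusses.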

  2. Finite element solution of optimal control problems with inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1990-01-01

    A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.

  3. Tag SNP selection via a genetic algorithm.

    PubMed

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for complex human diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms for constructing full haplotype patterns from small amounts of available data through computational methods (the tag SNP selection problem) are convenient and attractive. This problem has been proved to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find reasonable solutions within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute-force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
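
A genetic algorithm of the kind applied above can be sketched on a toy bit-string objective (maximize the number of 1s) rather than on real SNP data; the population size, rates, and generation count are illustrative:

```python
import random

# Minimal genetic algorithm skeleton: truncation selection, one-point
# crossover, and point mutation, run on the toy OneMax objective. This is a
# generic sketch, not the paper's tag-SNP encoding or fitness function.
random.seed(3)
n_bits, pop_size, gens = 30, 40, 60

def fitness(ind):
    return sum(ind)                                    # number of 1s

pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
for _ in range(gens):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: pop_size // 2]                     # truncation selection (elitist)
    children = []
    while len(children) < pop_size - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, n_bits)              # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(n_bits)                   # single-bit mutation
        child[i] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))                                   # close to the optimum of 30
```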

  4. [Kinetic theory and boundary conditions for highly inelastic spheres]. Quarterly progress report, April 1, 1993--June 30, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richman, M.

    1993-12-31

    In this quarter, a kinetic theory was employed to set up the boundary value problem for steady, fully developed, gravity-driven flows of identical, smooth, highly inelastic spheres down bumpy inclines. The solid fraction, mean velocity, and components of the full second moment of fluctuation velocity were treated as mean fields. In addition to the balance equations for mass and momentum, the balance of the full second moment of fluctuation velocity was treated as an equation that must be satisfied by the mean fields. However, in order to simplify the resulting boundary value problem, only fluxes of second moments in its isotropic piece were retained. The constitutive relations for the stresses and collisional source of second moment depend explicitly on the second moment of fluctuation velocity, and the constitutive relation for the energy flux depends on gradients of granular temperature, solid fraction, and components of the second moment. The boundary conditions require that the flows are free of stress and energy flux at their tops, and that momentum and energy are balanced at the bumpy base. The details of the boundary value problem are provided. In the next quarter, a solution procedure will be developed, and it will be employed to obtain sample numerical solutions to the boundary value problem described here.

  5. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.

    PubMed

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-06-06

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition of the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that the full-view area coverage can be approximately guaranteed, as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  6. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    PubMed Central

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition of the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that the full-view area coverage can be approximately guaranteed, as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304

  7. New control strategies for longwall armored face conveyors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broadfoot, A.R.; Betz, R.E.

    1998-03-01

    This paper investigates a new control approach for longwall armored face conveyors (AFCs) using variable-speed drives (VSDs). Traditionally, AFCs have used fixed-speed or two-speed motors, with various mechanical solutions employed to try to solve the problems that this causes. The VSD approach to the control problem promises to solve all the significant problems associated with the control of AFCs. This paper will present the control algorithms developed for a VSD-based AFC drive system and demonstrate potential performance via computer simulation. A full discussion of the problems involved with the control of AFCs can be found in the companion paper.

  8. Contextual approach to technology assessment: Implications for one-factor fix solutions to complex social problems

    NASA Technical Reports Server (NTRS)

    Mayo, L. H.

    1975-01-01

    The contextual approach is discussed, which undertakes to demonstrate that technology assessment assists in the identification of the full range of implications of taking a particular action and facilitates the consideration of alternative means by which the total affected social problem context might be changed by available project options. It is found that the social impacts of an application on participants, institutions, processes, and social interests, and the accompanying interactions, may not only induce modifications in the problem context delineated for examination with respect to the design, operations, regulation, and use of the posited application, but also affect related social problem contexts.

  9. Binning and filtering: the six-color solution

    NASA Astrophysics Data System (ADS)

    Ashdown, Ian; Robinson, Shane; Salsbury, Marc

    2006-08-01

    The use of LED backlighting for LCD displays requires careful binning of red, green, and blue LEDs by dominant wavelength to maintain the color gamuts as specified by NTSC, SMPTE, and EBU/ITU standards. This problem also occurs to a lesser extent with RGB and RGBA assemblies for solid-state lighting, where color gamut consistency is required for color-changing luminaires. In this paper, we propose a "six-color solution," based on Grassmann's laws, that does not require color binning, but nevertheless guarantees a fixed color gamut that subsumes the color gamuts of carefully-binned RGB assemblies. A further advantage of this solution is that it solves the problem of peak wavelength shifts with varying junction temperatures. The color gamut can thus remain fixed over the full range of LED intensities and ambient temperatures. A related problem occurs with integrated circuit (IC) colorimeters used for optical feedback with LED backlighting and RGB(A) solid-state lighting, wherein it can be difficult to distinguish between peak wavelength shifts and changes in LED intensity. We apply our six-color solution to the design of a novel colorimeter for LEDs that independently measures changes in peak wavelength and intensity. The design is compatible with current manufacturing techniques for tristimulus colorimeter ICs. Together, the six-color solution for LEDs and colorimeters enables less expensive LED backlighting and solid-state lighting systems with improved color stability.
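
Grassmann's laws, as invoked above, make tristimulus values additive under mixing, so a mixture's chromaticity follows from summed XYZ components; the XYZ values below are illustrative (roughly sRGB-like primaries), not measured LED data:

```python
# Additivity of tristimulus values under mixing (Grassmann's laws): the XYZ of
# a mixture of LED primaries is the component-wise sum of the primaries' XYZ.
# The primary XYZ values here are illustrative, sRGB-like numbers.
def mix_xyz(*components):
    return tuple(sum(c[i] for c in components) for i in range(3))

red   = (41.2, 21.3, 1.9)
green = (35.8, 71.5, 11.9)
blue  = (18.0, 7.2, 95.0)
X, Y, Z = mix_xyz(red, green, blue)
x = X / (X + Y + Z)                  # CIE x chromaticity of the mixed light
y = Y / (X + Y + Z)                  # CIE y chromaticity
print(round(x, 3), round(y, 3))     # near the D65-like white point
```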

  10. The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.

    NASA Astrophysics Data System (ADS)

    Loridan, V.; Ripoll, J. F.; De Vuyst, F.

    2017-12-01

    Many works over the past 40 years have derived analytical solutions of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the number of eigenmodes N required for an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also Jupiter and Saturn are discussed.

  11. The origin of spurious solutions in computational electromagnetics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Wu, Jie; Povinelli, L. A.

    1995-01-01

    The origin of spurious solutions in computational electromagnetics, which violate the divergence equations, is deeply rooted in a misconception about the first-order Maxwell's equations and in an incorrect derivation and use of the curl-curl equations. The divergence equations must always be included in the first-order Maxwell's equations to maintain the ellipticity of the system in the space domain and to guarantee the uniqueness of the solution and/or the accuracy of the numerical solutions. The div-curl method and the least-squares method provide rigorous derivations of the equivalent second-order Maxwell's equations and their boundary conditions. The node-based least-squares finite element method (LSFEM) is recommended for solving the first-order full Maxwell equations directly. Examples of numerical solutions by LSFEM for time-harmonic problems are given to demonstrate that the LSFEM is free of spurious solutions.

  12. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    NASA Technical Reports Server (NTRS)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.

  13. Application of firefly algorithm to the dynamic model updating problem

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention over the past decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem; to the authors' best knowledge, this is the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aimed at minimizing the difference between the measured natural frequencies and mode shapes and those of the FE model. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
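
The firefly mechanics described above (brightness from the objective, attractiveness decaying with distance, a shrinking random walk) can be sketched on a toy objective; all parameter values here are illustrative, not those used for the bridge model.

```python
import numpy as np

# Minimal firefly algorithm: each firefly moves toward every brighter
# one with attractiveness beta0*exp(-gamma*r^2), plus a random step.
def firefly(obj, dim=2, n=15, iters=60, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, (n, dim))
    f = np.array([obj(v) for v in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    f[i] = obj(x[i])
        alpha *= 0.97                                # shrink the random walk
    best = int(np.argmin(f))
    return x[best], f[best]

x_best, f_best = firefly(lambda v: float(np.sum(v ** 2)))
```

In the model-updating setting, `obj` would instead be the discrepancy between measured and FE-predicted natural frequencies and mode shapes.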

  14. Fluid displacement between two parallel plates: a non-empirical model displaying change of type from hyperbolic to elliptic equations

    NASA Astrophysics Data System (ADS)

    Shariati, M.; Talon, L.; Martin, J.; Rakotomalala, N.; Salin, D.; Yortsos, Y. C.

    2004-11-01

    We consider miscible displacement between parallel plates in the absence of diffusion, with a concentration-dependent viscosity. By selecting a piecewise viscosity function, this can also be considered as ‘three-fluid’ flow in the same geometry. Assuming symmetry across the gap and based on the lubrication (‘equilibrium’) approximation, a description in terms of two quasi-linear hyperbolic equations is obtained. We find that the system is hyperbolic and can be solved analytically when the mobility profile is monotonic, or when the mobility of the middle phase is smaller than those of its neighbours. When the mobility of the middle phase is larger, a change of type is displayed, an elliptic region developing in the composition space. Numerical solutions of Riemann problems of the hyperbolic system spanning the elliptic region, with small diffusion added, show good agreement with the analytical solution outside the elliptic region, but unstable behaviour inside it. In these problems, the elliptic region arises precisely at the displacement front. Crossing the elliptic region requires the solution of what is essentially an eigenvalue problem of the full higher-dimensional model, obtained here using lattice BGK simulations. The hyperbolic-to-elliptic change of type reflects the failure of the lubrication approximation, underlying the quasi-linear hyperbolic formalism, to describe the problem uniformly. The obtained solution is analogous to non-classical shocks recently suggested in problems with change of type.

  15. Simplified Predictive Models for CO2 Sequestration Performance Assessment: Research Topical Report on Task #4 - Reduced-Order Method (ROM) Based Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Jin, Larry; He, Jincong

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements of about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.

  16. ICASE Semiannual Report 1 October 1991 - 31 March 1992

    DTIC Science & Technology

    1992-05-01

    who have resident appointments for limited periods of time as well as by visiting and resident consultants. Members of NASA’s research staff may also...performed showing that the full optimization problem can be solved with a computational cost which is only a few times more than that of solving the PDE...The goal is to obtain a solution of the optimization problem in a computational cost which is just a few times (2-3) that of the flow solver. Such a

  17. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  18. On the Unreasonable Effectiveness of post-Newtonian Theory in Gravitational-Wave Physics

    ScienceCinema

    Will, Clifford M.

    2017-12-22

    The first indirect detection of gravitational waves involved a binary system of neutron stars.  In the future, the first direct detection may also involve binary systems -- inspiralling and merging binary neutron stars or black holes. This means that it is essential to understand in full detail the two-body system in general relativity, a notoriously difficult problem with a long history. Post-Newtonian approximation methods are thought to work only under slow motion and weak field conditions, while numerical solutions of Einstein's equations are thought to be limited to the final merger phase.  Recent results have shown that post-Newtonian approximations seem to remain unreasonably valid well into the relativistic regime, while advances in numerical relativity now permit solutions for numerous orbits before merger.  It is now possible to envision linking post-Newtonian theory and numerical relativity to obtain a complete "solution" of the general relativistic two-body problem.  These solutions will play a central role in detecting and understanding gravitational wave signals received by interferometric observatories on Earth and in space.

  19. Primal Barrier Methods for Linear Programming

    DTIC Science & Technology

    1989-06-01

    A Theoretical Bound Concerning the difficulties introduced by an ill-conditioned H- 1, Dikin [Dik67] and Stewart [Stew87] show for a full-rank A...Dik67] I. I. Dikin (1967). Iterative solution of problems of linear and quadratic pro- gramming, Doklady Akademii Nauk SSSR, Tom 174, No. 4. [Fia79] A. V

  20. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
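
A two-grid correction cycle, the building block of the FMG/FAS scheme above, can be sketched for the 1D Poisson model problem; since this sketch is linear, plain coarse-grid correction stands in for FAS, and all parameters are illustrative.

```python
import numpy as np

# Two-grid cycle for -u'' = f on (0,1) with zero boundary values:
# weighted-Jacobi smoothing, full-weighting restriction, exact coarse
# solve, and linear-interpolation prolongation.
def poisson_matrix(n, h):
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, x, b, sweeps, w=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + w * (b - A @ x) / d
    return x

def two_grid(A, x, b, n, h):
    x = jacobi(A, x, b, 3)                                # pre-smooth
    r = b - A @ x
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # restrict residual
    Ac = poisson_matrix(rc.size, 2.0 * h)                 # coarse operator
    ec = np.linalg.solve(Ac, rc)                          # exact coarse solve
    e = np.zeros(n)                                       # prolong correction
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    return jacobi(A, x + e, b, 3)                         # post-smooth

n = 63                       # odd n so coarse points align with fine ones
h = 1.0 / (n + 1)
A = poisson_matrix(n, h)
b = np.ones(n)
x = np.zeros(n)
for _ in range(2):           # two cycles
    x = two_grid(A, x, b, n, h)
```

An FMG/FAS solver nests this cycle recursively over several grid levels and handles nonlinear operators; the residual reduction per cycle is what delivers the speed-up over single-grid iteration.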

  1. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt the pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computations of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.

  2. MPACT Standard Input User's Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole core solution based on the 2D-1D solution method provides the capability for full core depletion calculations.

  3. GRACE RL03-v2 monthly time series of solutions from CNES/GRGS

    NASA Astrophysics Data System (ADS)

    Lemoine, Jean-Michel; Bourgogne, Stéphane; Bruinsma, Sean; Gégout, Pascal; Reinquin, Franck; Biancale, Richard

    2015-04-01

    Based on GRACE GPS and KBR Level-1B.v2 data, as well as on LAGEOS-1/2 SLR data, CNES/GRGS published in 2014 the third full re-iteration of its GRACE gravity field solutions. This monthly time series of solutions, named RL03-v1, complete to spherical harmonic degree/order 80, has displayed interesting performance in terms of spatial resolution and signal amplitude compared to JPL/GFZ/CSR RL05. This is due to a careful selection of the background models (FES2014 ocean tides, ECMWF ERA-interim (atmosphere) and TUGO (non IB-ocean) "dealiasing" models every 3 hours) and to the choice of an original method for gravity field inversion: truncated SVD. As with the previous CNES/GRGS releases, no additional filtering of the solutions is necessary before using them. Some problems have however been identified in CNES/GRGS RL03-v1: an erroneous mass signal located in two small circular rings close to the Earth's poles, leading to the recommendation not to use RL03-v1 above 82° latitude North and South; and a weakness in the sectorials due to an excessive downweighting of the GRACE GPS observations. These two problems have been understood and addressed, leading to the computation of a corrected time series of solutions, RL03-v2. The corrective steps have been: to strengthen the determination of the very low degrees by adding Starlette and Stella SLR data to the normal equations; to increase the weight of the GRACE GPS observations; and to adopt a two-step approach for the computation of the solutions, first a Cholesky inversion for the low degrees, followed by a truncated SVD solution. The identification of these problems will be discussed and the performance of the new time series evaluated.

  4. Hybrid asymptotic-numerical approach for estimating first-passage-time densities of the two-dimensional narrow capture problem.

    PubMed

    Lindsay, A E; Spoonmore, R T; Tzou, J C

    2016-10-01

    A hybrid asymptotic-numerical method is presented for obtaining an asymptotic estimate for the full probability distribution of capture times of a random walker by multiple small traps located inside a bounded two-dimensional domain with a reflecting boundary. As motivation for this study, we calculate the variance in the capture time of a random walker by a single interior trap and determine this quantity to be comparable in magnitude to the mean. This implies that the mean is not necessarily reflective of typical capture times and that the full density must be determined. To solve the underlying diffusion equation, the method of Laplace transforms is used to obtain an elliptic problem of modified Helmholtz type. In the limit of vanishing trap sizes, each trap is represented as a Dirac point source that permits the solution of the transform equation to be represented as a superposition of Helmholtz Green's functions. Using this solution, we construct asymptotic short-time solutions of the first-passage-time density, which captures peaks associated with rapid capture by the absorbing traps. When numerical evaluation of the Helmholtz Green's function is employed followed by numerical inversion of the Laplace transform, the method reproduces the density for larger times. We demonstrate the accuracy of our solution technique with a comparison to statistics obtained from a time-dependent solution of the diffusion equation and discrete particle simulations. In particular, we demonstrate that the method is capable of capturing the multimodal behavior in the capture time density that arises when the traps are strategically arranged. The hybrid method presented can be applied to scenarios involving both arbitrary domains and trap shapes.

  5. Closed, analytic, boson realizations for Sp(4)

    NASA Astrophysics Data System (ADS)

    Klein, Abraham; Zhang, Qing-Ying

    1986-08-01

    The problem of determining a boson realization for an arbitrary irrep of the unitary symplectic algebra Sp(2d) [or of the corresponding discrete unitary irreps of the unbounded algebra Sp(2d,R)] has been solved completely in recent papers by Deenen and Quesne [J. Deenen and C. Quesne, J. Math. Phys. 23, 878, 2004 (1982); 25, 1638 (1984); 26, 2705 (1985)] and by Moshinsky and co-workers [O. Castaños, E. Chacón, M. Moshinsky, and C. Quesne, J. Math. Phys. 26, 2107 (1985); M. Moshinsky, ``Boson realization of symplectic algebras,'' to be published]. This solution is not known in closed analytic form except for d=1 and for special classes of irreps for d>1. A different method of obtaining a boson realization that solves the full problem for Sp(4) is described. The method utilizes the chain Sp(2d) ⊇ SU(2)×SU(2)×⋯×SU(2) (d times), which, for d≥4, does not provide a complete set of quantum numbers. Though a simple solution of the missing label problem can be given, this solution does not help in the construction of a mapping algorithm for general d.

  6. Global convergence of inexact Newton methods for transonic flow

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1990-01-01

    In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.
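
The local damping strategy mentioned above can be sketched on a small nonlinear system; a direct linear solve stands in here for the GMRES inner iteration, and the illustrative system below is not the full potential equation.

```python
import numpy as np

# Damped Newton iteration: take the Newton step, then halve the step
# length until the residual norm decreases (standard local damping).
def damped_newton(F, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(J(x), -r)       # GMRES would go here
        lam = 1.0
        while (np.linalg.norm(F(x + lam * step)) >= np.linalg.norm(r)
               and lam > 1e-4):
            lam *= 0.5                         # halve until residual drops
        x = x + lam * step
    return x

# Illustrative 2x2 system: circle of radius 2 intersected with x0 = x1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
x = damped_newton(F, J, np.array([3.0, 0.5]))
```

The continuation strategies in the paper supply the "poor initial guess" remedy that damping alone cannot: a coarse-grid solution interpolated to the fine grid plays the role of the starting point `x`.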

  7. Local fields and effective conductivity tensor of ellipsoidal particle composite with anisotropic constituents

    NASA Astrophysics Data System (ADS)

    Kushch, Volodymyr I.; Sevostianov, Igor; Giraud, Albert

    2017-11-01

    An accurate semi-analytical solution of the conductivity problem for a composite with anisotropic matrix and arbitrarily oriented anisotropic ellipsoidal inhomogeneities has been obtained. The developed approach combines the superposition principle with the multipole expansion of perturbation fields of inhomogeneities in terms of ellipsoidal harmonics and reduces the boundary value problem to an infinite system of linear algebraic equations for the induced multipole moments of inhomogeneities. A complete full-field solution is obtained for the multi-particle models comprising inhomogeneities of diverse shape, size, orientation and properties which enables an adequate account for the microstructure parameters. The solution is valid for the general-type anisotropy of constituents and arbitrary orientation of the orthotropy axes. The effective conductivity tensor of the particulate composite with anisotropic constituents is evaluated in the framework of the generalized Maxwell homogenization scheme. Application of the developed method to composites with imperfect ellipsoidal interfaces is straightforward. Their incorporation yields probably the most general model of a composite that may be considered in the framework of analytical approach.

  8. Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma

    DOE PAGES

    Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...

    2016-09-01

    We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.

  9. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour includes both signum nonlinearities as well as high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522

  10. Approximate solutions of acoustic 3D integral equation and their application to seismic modeling and full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2017-10-01

    Over recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, more accurate than the classic Born and Rytov approximations, have been proposed in the field of electromagnetic modeling. Those developments can be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of those methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared with the Born approximation. The developed inversion algorithm is based on conjugate-gradient type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.

  11. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem

    PubMed Central

    Yu, Yi; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of a hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensionality, and solution techniques for the model have been a research hotspot. Because the artificial bee colony (ABC) algorithm can efficiently solve high-dimensional problems, an improved ABC algorithm is proposed in this paper to solve the DHTS problem. The improvements are twofold. First, the local search in each generation is guided by the global optimal solution and its gradient: the global optimal solution improves the search efficiency of the algorithm but reduces diversity, while the gradient weakens the loss of diversity caused by the global optimal solution. Second, inspired by the genetic algorithm, a nectar source that has not been updated within a limit number of generations is transformed into a new one by selection, crossover, and mutation, which preserves individual diversity and makes full use of prior information, improving the global search ability of the algorithm. The two improvements are shown to be effective on a classical numerical example, and the genetic operator contributes most to the performance gain. The results are also compared with those of other state-of-the-art algorithms; the enhanced ABC algorithm shows general advantages in minimum, average, and maximum cost, demonstrating its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems, and also offer a novel reference for the improvement and application of such algorithms. PMID:29324743
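
The baseline ABC cycle that the two enhancements build on (employed bees, fitness-weighted onlookers, and a scout replacing exhausted sources) can be sketched on a toy objective; the parameters and objective are illustrative, not the hydro-thermal dispatch model.

```python
import numpy as np

# Minimal artificial bee colony: employed bees perturb each food source
# toward a random partner, onlookers revisit sources in proportion to
# fitness, and a scout resets any source exhausted past `limit` trials.
def abc(obj, dim=2, n_food=10, iters=80, limit=15, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = -2.0, 2.0
    x = rng.uniform(lo, hi, (n_food, dim))
    f = np.array([obj(v) for v in x])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        k = rng.integers(n_food - 1)
        k += k >= i                              # partner index != i
        j = rng.integers(dim)
        cand = x[i].copy()
        cand[j] += rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        fc = obj(cand)
        if fc < f[i]:
            x[i], f[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                  # employed-bee phase
            try_neighbour(i)
        fit = 1.0 / (1.0 + f)                    # onlooker weights (f >= 0)
        for i in rng.choice(n_food, n_food, p=fit / fit.sum()):
            try_neighbour(i)                     # onlooker phase
        worn = int(np.argmax(trials))            # scout phase
        if trials[worn] > limit:
            x[worn] = rng.uniform(lo, hi, dim)
            f[worn] = obj(x[worn])
            trials[worn] = 0
    best = int(np.argmin(f))
    return x[best], f[best]

x_best, f_best = abc(lambda v: float(np.sum(v ** 2)))
```

The paper's enhancements graft onto this skeleton: gradient-guided moves in `try_neighbour`, and genetic selection/crossover/mutation replacing the random scout reset.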

  13. Flux-vector splitting algorithm for chain-rule conservation-law form

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Nguyen, H. L.; Willis, E. A.; Steinthorsson, E.; Li, Z.

    1991-01-01

    A flux-vector splitting algorithm with Newton-Raphson iteration was developed for the 'full compressible' Navier-Stokes equations cast in chain-rule conservation-law form. The algorithm is intended for problems with deforming spatial domains and for problems whose governing equations cannot be cast in strong conservation-law form. The usefulness of the algorithm for such problems was demonstrated by applying it to analyze the unsteady, two- and three-dimensional flows inside one combustion chamber of a Wankel engine under nonfiring conditions. Solutions were obtained to examine the algorithm in terms of conservation error, robustness, and ability to handle complex flows on time-dependent grid systems.

  14. Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.

    PubMed

    Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G

    2018-04-18

    We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.

  15. Full cycle rapid scan EPR deconvolution algorithm.

    PubMed

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum excite the electron spins twice during each scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem without a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and post-processed individually. Deconvolution of either contribution alone is a well-posed problem with a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must decay completely by the end of each half-scan so that it is not truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into higher EPR sensitivity. A stable full-scan algorithm is described in this paper that does not require truncation of the periodic response. The algorithm exploits the additivity of linear systems: the response to a sum of two inputs equals the sum of the responses to each input separately. Based on this property, the mathematical model for CW RS EPR can be replaced by a sum of two independent full-cycle pulsed field-modulated experiments; in each of them, the excitation power is zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit: the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method; for this reason, only a factor-of-two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.
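
    The contrast above between the ill-posed full-cycle problem and the well-posed single-passage problem comes down to whether deconvolution can be stabilized. As a generic illustration of stable deconvolution only (this is not Tseytlin's RS EPR algorithm, and the kernel and regularization parameter are illustrative), a Tikhonov-regularized Fourier deconvolution might look like:

    ```python
    import numpy as np

    def deconvolve(signal, kernel, eps=1e-9):
        """Recover x from signal = x (*) kernel (circular convolution) by
        regularized division in the Fourier domain; eps damps frequencies
        where the kernel response is small, keeping the inverse stable."""
        S = np.fft.fft(signal)
        H = np.fft.fft(kernel, n=len(signal))
        X = S * np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.real(np.fft.ifft(X))
    ```

    When the kernel's spectrum is bounded away from zero the problem is well posed and eps barely matters; when it has near-zeros, the regularization is what keeps the solution stable.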

  16. Full cycle rapid scan EPR deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum excite the electron spins twice during each scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem without a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and post-processed individually. Deconvolution of either contribution alone is a well-posed problem with a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must decay completely by the end of each half-scan so that it is not truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into higher EPR sensitivity. A stable full-scan algorithm is described in this paper that does not require truncation of the periodic response. The algorithm exploits the additivity of linear systems: the response to a sum of two inputs equals the sum of the responses to each input separately. Based on this property, the mathematical model for CW RS EPR can be replaced by a sum of two independent full-cycle pulsed field-modulated experiments; in each of them, the excitation power is zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit: the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method; for this reason, only a factor-of-two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan.

  17. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
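
    Of the closed-form criteria mentioned above, zero-forcing is the easiest to sketch: steer energy toward the desired channel while projecting the transmit weights into the null space of the self-interference channel. The function below is a hedged illustration of that general idea, not the authors' scheme; the names and dimensions are assumptions:

    ```python
    import numpy as np

    def zf_tx_beamformer(h_desired, H_si):
        """Unit-norm transmit weights: match the desired channel h_desired
        after projecting onto the null space of the self-interference
        channel matrix H_si, so that H_si @ w = 0."""
        n_tx = H_si.shape[1]
        # Orthogonal projector onto the null space of H_si.
        null_proj = np.eye(n_tx) - np.linalg.pinv(H_si) @ H_si
        w = null_proj @ h_desired.conj()
        return w / np.linalg.norm(w)
    ```

    The projection nulls the self-interference exactly, at the cost of some forward gain relative to unconstrained matched filtering.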

  18. Unitarity problems in 3D gravity theories

    NASA Astrophysics Data System (ADS)

    Alkac, Gokhan; Basanisi, Luca; Kilicarslan, Ercan; Tekin, Bayram

    2017-07-01

    We revisit the problem of the bulk-boundary unitarity clash in 2 +1 -dimensional gravity theories, which has been an obstacle in providing a viable dual two-dimensional conformal field theory for bulk gravity in anti-de Sitter (AdS) spacetime. Chiral gravity, which is a particular limit of cosmological topologically massive gravity (TMG), suffers from perturbative log-modes with negative energies inducing a nonunitary logarithmic boundary field theory. We show here that any f (R ) extension of TMG does not improve the situation. We also study the perturbative modes in the metric formulation of minimal massive gravity—originally constructed in a first-order formulation—and find that the massive mode has again negative energy except in the chiral limit. We comment on this issue and also discuss a possible solution to the problem of negative-energy modes. In any of these theories, the infinitesimal dangerous deformations might not be integrable to full solutions; this suggests a linearization instability of AdS spacetime in the direction of the perturbative log-modes.

  19. A Model for Displacements Between Parallel Plates That Shows Change of Type from Hyperbolic to Elliptic

    NASA Astrophysics Data System (ADS)

    Shariati, Maryam; Yortsos, Yannis; Talon, Laurent; Martin, Jerome; Rakotomalala, Nicole; Salin, Dominique

    2003-11-01

    We consider miscible displacement between parallel plates, where the viscosity is a function of the concentration. By selecting a piecewise representation, the problem can be considered as ``three-phase'' flow. Assuming a lubrication-type approximation, the mathematical description is in terms of two quasi-linear hyperbolic equations. When the mobility of the middle phase is smaller than that of its neighbors, the system is genuinely hyperbolic and can be solved analytically. However, when it is larger, an elliptic region develops. This change-of-type behavior is proved here for the first time based on sound physical principles. Numerical solutions with a small diffusion are presented. Good agreement is obtained outside the elliptic region, but not inside, where the numerical results show unstable behavior. We conjecture that for the solution of the real problem in the mixed-type case, the full higher-dimensional problem must be considered inside the elliptic region, where the lubrication (parallel-flow) approximation is no longer appropriate. This is discussed in a companion presentation.

  20. Discontinuous Galerkin finite element methods for radiative transfer in spherical symmetry

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.; Bolte, J.; Patzer, A. B. C.

    2016-11-01

    The discontinuous Galerkin finite element method (DG-FEM) is successfully applied to treat a broad variety of transport problems numerically. In this work, we use the full capacity of the DG-FEM to solve the radiative transfer equation in spherical symmetry. We present a discontinuous Galerkin method to directly solve the spherically symmetric radiative transfer equation as a two-dimensional problem. The transport equation in spherical atmospheres is more complicated than in the plane-parallel case owing to the appearance of an additional derivative with respect to the polar angle. The DG-FEM formalism allows for the exact integration of arbitrarily complex scattering phase functions, independent of the angular mesh resolution. We show that the discontinuous Galerkin method is able to describe accurately the radiative transfer in extended atmospheres and to capture discontinuities or complex scattering behaviour which might be present in the solution of certain radiative transfer tasks and can, therefore, cause severe numerical problems for other radiative transfer solution methods.

  1. Intrasystem Analysis Program (IAP) code summaries

    NASA Astrophysics Data System (ADS)

    Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.

    1983-05-01

    This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The Wire Programs deal with the application of multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.

  2. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and the inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. A full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
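
    The Riccati machinery referenced above can be illustrated for a standard deterministic discrete-time LQR problem by fixed-point iteration on the discrete algebraic Riccati equation. This is a generic textbook sketch, not the paper's non-standard stochastic derivation:

    ```python
    import numpy as np

    def dare_iterate(A, B, Q, R, iters=500):
        """Iterate P <- Q + A'P(A - BK) with K = (R + B'PB)^{-1} B'PA to the
        fixed point of the discrete algebraic Riccati equation; the optimal
        state feedback is u = -K x."""
        P = Q.copy()
        for _ in range(iters):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return P, K
    ```

    For the scalar system A = B = Q = R = 1 the fixed point satisfies p = 1 + p/(1 + p), whose positive root is the golden ratio (1 + √5)/2.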

  3. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then becomes finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. A One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, preliminary results show that optimal solutions for multiple instances were found efficiently.
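
    The 2-level full factorial design named above is straightforward to enumerate: every combination of each factor's low and high level, 2^k runs in total, from which main effects can be estimated. The sketch below is illustrative (the function and factor names are assumptions, not from the paper):

    ```python
    import itertools

    def full_factorial_2k(factor_levels):
        """Enumerate all 2^k runs of a 2-level full factorial design from a
        {factor_name: (low, high)} mapping."""
        names = list(factor_levels)
        runs = [dict(zip(names, combo))
                for combo in itertools.product(*(factor_levels[n] for n in names))]
        return names, runs

    def main_effect(runs, results, name, low, high):
        """Main effect of one factor: mean response at its high level minus
        mean response at its low level."""
        hi = [y for run, y in zip(runs, results) if run[name] == high]
        lo = [y for run, y in zip(runs, results) if run[name] == low]
        return sum(hi) / len(hi) - sum(lo) / len(lo)
    ```

    Because every level combination is present, main effects and all interactions are estimable, which is what OFAT tuning gives up.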

  4. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are compared in the context of the two design problems.

  5. A Formative Evaluation of CU-SeeMe.

    DTIC Science & Technology

    1995-02-01

    CU-SeeMe is a video conferencing software package that was designed and programmed at Cornell University. The program works with the TCP/IP network protocol and allows two or more parties to conduct a real-time video conference with full audio support. In this paper we evaluate CU-SeeMe through... caused the problem and why. This helps in the process of formulating solutions for observed usability problems. All the testing results are combined in the Appendix in an illustrated partial redesign of the CU-SeeMe interface.

  6. Image registration under translation and rotation in two-dimensional planes using Fourier slice theorem.

    PubMed

    Pohit, M; Sharma, J

    2015-05-10

    Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of the log-polar transform gives a solution to this problem, but at the cost of losing the vital phase information of the image. The main objective of this paper is to develop an algorithm based on the Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable to any arbitrary object shift and to rotations over the full 180° range.
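
    As a related but simpler FFT-based illustration, the translation part of the registration problem alone can be solved by standard phase correlation, which keeps exactly the phase information that the log-polar route discards. This sketch is not the paper's Fourier-slice algorithm (which additionally recovers rotation):

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        """Estimate the cyclic shift (dy, dx) such that
        b = np.roll(a, (dy, dx), axis=(0, 1)), via the normalized
        cross-power spectrum (phase correlation)."""
        A, B = np.fft.fft2(a), np.fft.fft2(b)
        R = np.conj(A) * B
        R /= np.abs(R) + 1e-12  # keep only the phase
        corr = np.real(np.fft.ifft2(R))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts.
        h, w = a.shape
        if dy > h // 2:
            dy -= h
        if dx > w // 2:
            dx -= w
        return int(dy), int(dx)
    ```

    For a pure cyclic shift the normalized spectrum is a pure phase ramp, so the inverse transform peaks exactly at the shift.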

  7. Enabling our instruments: accommodation, universal design, and access to participation in research.

    PubMed

    Meyers, A R; Andresen, E M

    2000-12-01

    The objective of this article is to discuss problems related to full participation of people with disabilities in health services and health outcomes research. To show the problems and to suggest solutions, we offer examples from personal research experiences (ours and colleagues'), as well as from published literature, requirements of research agencies, web and news sources, and research participants' feedback. A combination of formal and informal processes can be used to enable future instruments and methods. There are ethical, legal, and methodologic imperatives for research participation enablement.

  8. Excitation of Continuous and Discrete Modes in Incompressible Boundary Layers

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Reshotko, Eli

    1998-01-01

    This report documents the full details of the condensed journal article by Ashpis & Reshotko (JFM, 1990) entitled "The Vibrating Ribbon Problem Revisited." A revised formal solution of the vibrating ribbon problem of hydrodynamic stability is presented. The initial formulation of Gaster (JFM, 1965) is modified by application of the Briggs method and a careful treatment of the complex double Fourier transform inversions. Expressions are obtained in a natural way for the discrete spectrum as well as for the four branches of the continuous spectra. These correspond to discrete and branch-cut singularities in the complex wave-number plane. The solutions from the continuous spectra decay both upstream and downstream of the ribbon, with the decay in the upstream direction being much more rapid than that in the downstream direction. Comments and clarification of related prior work are made.

  9. Dissociative conceptual and quantitative problem solving outcomes across interactive engagement and traditional format introductory physics

    NASA Astrophysics Data System (ADS)

    McDaniel, Mark A.; Stoen, Siera M.; Frey, Regina F.; Markow, Zachary E.; Hynes, K. Mairin; Zhao, Jiuqing; Cahill, Michael J.

    2016-12-01

    The existing literature indicates that interactive-engagement (IE) based general physics classes improve conceptual learning relative to more traditional lecture-oriented classrooms. Very little research, however, has examined quantitative problem-solving outcomes from IE-based relative to traditional lecture-based physics classes. The present study included both pre- and post-course conceptual-learning assessments and a new quantitative physics problem-solving assessment comprising three representative conservation-of-energy problems from a first-semester calculus-based college physics course. Scores for problem translation, plan coherence, solution execution, and evaluation of solution plausibility were extracted for each problem. Over 450 students in three IE-based sections and two traditional lecture sections taught at the same university during the same semester participated. As expected, the IE-based course produced more robust gains on a Force Concept Inventory than did the lecture course. By contrast, when the full sample was considered, gains in quantitative problem solving were significantly greater for lecture than for IE-based physics; when students were matched on pre-test scores, there was still no advantage for IE-based physics in gains in quantitative problem solving. Further, the association between performance on the concept inventory and quantitative problem solving was minimal. These results highlight that improved conceptual understanding does not necessarily support improved quantitative physics problem solving, and that the instructional method appears to have less bearing on gains in quantitative problem solving than do the kinds of problems emphasized in the courses and homework and the overlap of these problems with those on the assessment.

  10. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
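
    The filtering idea above, high-order central differencing everywhere except where a cheap local test triggers a more dissipative fallback, can be sketched for periodic linear advection. Here a simple slope-sign test and first-order upwinding stand in for the full ENO apparatus, so this is an illustration of the switching structure, not the authors' scheme:

    ```python
    import numpy as np

    def filtered_advection_step(u, c, dt, dx):
        """One forward-Euler step of u_t + c u_x = 0 (c > 0, periodic grid):
        4th-order central differencing everywhere except where a local
        slope sign change flags possible spurious oscillation, in which
        case the scheme falls back to 1st-order upwinding."""
        um2, um1 = np.roll(u, 2), np.roll(u, 1)
        up1, up2 = np.roll(u, -1), np.roll(u, -2)
        central = (-up2 + 8.0 * up1 - 8.0 * um1 + um2) / (12.0 * dx)  # 4th-order u_x
        upwind = (u - um1) / dx                                       # 1st-order u_x
        oscillatory = (up1 - u) * (u - um1) < 0.0  # neighboring slopes disagree
        ux = np.where(oscillatory, upwind, central)
        return u - c * dt * ux
    ```

    In smooth monotone regions the cheap central stencil is used throughout, which is where the reported factor-of-three speedup comes from.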

  11. A Green's function method for two-dimensional reactive solute transport in a parallel fracture-matrix system

    NASA Astrophysics Data System (ADS)

    Chen, Kewei; Zhan, Hongbin

    2018-06-01

    The reactive solute transport in a single fracture bounded by upper and lower matrixes is a classical problem that captures the dominant factors affecting transport behavior beyond pore scale. A parallel fracture-matrix system, which considers the interaction among multiple parallel fractures, is an extension of the single fracture-matrix system. The existing analytical or semi-analytical solutions for solute transport in a parallel fracture-matrix system simplify the problem to various degrees, such as neglecting the transverse dispersion in the fracture and/or the longitudinal diffusion in the matrix. The difficulty of solving the full two-dimensional (2-D) problem lies in the calculation of the mass exchange between the fracture and matrix. In this study, we propose an innovative Green's function approach to address the 2-D reactive solute transport in a parallel fracture-matrix system. The flux at the interface is calculated numerically. It is found that the transverse dispersion in the fracture can be safely neglected due to the small scale of the fracture aperture. However, neglecting the longitudinal matrix diffusion would overestimate the concentration profile near the solute entrance face and underestimate the concentration profile at the far side. The error caused by neglecting the longitudinal matrix diffusion decreases with increasing Peclet number. The longitudinal matrix diffusion does not have an obvious influence on the concentration profile in the long term. The developed model is applied to a dense non-aqueous-phase-liquid (DNAPL) contamination field case in the New Haven Arkose of Connecticut, USA, to estimate trichloroethylene (TCE) behavior over 40 years. The ratio of the TCE mass stored in the matrix to the injected TCE mass increases above 90% in less than 10 years.

  12. Producing Decisions in Service-User Groups for People with an Intellectual Disability: Two Contrasting Facilitator Styles

    ERIC Educational Resources Information Center

    Antaki, Charles; Finlay, W. M. L.; Sheridan, Emma; Jingree, Treena; Walton, Chris

    2006-01-01

    Service-user groups whose goals include the promotion of self-advocacy for people with an intellectual disability aim, among other things, to encourage service users to identify problems and find solutions. However, service users' contributions to group sessions may not always be full and spontaneous. This presents a dilemma to the facilitator. In…

  13. Finite difference methods for the solution of unsteady potential flows

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.

    1982-01-01

    Various problems which are confronted in the development of an unsteady finite difference potential code are reviewed mainly in the context of what is done for a typical small disturbance and full potential method. The issues discussed include choice of equations, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three dimensional rotor calculations, are demonstrated.

  14. FM Radio; An Oral Communication Project for Migrants in Palm Beach County.

    ERIC Educational Resources Information Center

    Early, L. F.

    This report gives a full description of the broadcasting and operation of WHRS-FM, a FM radio station established by federal grant to serve migrant workers and their children in Palm Beach County, Florida. The goal of the project was to evaluate FM radio as a solution to the serious economic and educational problem of communicating with the…

  15. GOMA 6.0 :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunk, Peter Randall; Rao, Rekha Ranjana; Chen, Ken S

    Goma 6.0 is a finite element program which excels in analyses of multiphysical processes, particularly those involving the major branches of mechanics (viz. fluid/solid mechanics, energy transport and chemical species transport). Goma is based on a full-Newton-coupled algorithm which allows for simultaneous solution of the governing principles, making the code ideally suited for problems involving closely coupled bulk mechanics and interfacial phenomena. Example applications include, but are not limited to, coating and polymer processing flows, super-alloy processing, welding/soldering, electrochemical processes, and solid-network or solution film drying. This document serves as a user's guide and reference.

  16. Reduced and simplified chemical kinetics for air dissociation using Computational Singular Perturbation

    NASA Technical Reports Server (NTRS)

    Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.

    1990-01-01

    The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.

  17. Improving the Flexibility of Optimization-Based Decision Aiding Frameworks for Integrated Water Resource Management

    NASA Astrophysics Data System (ADS)

    Guillaume, J. H.; Kasprzyk, J. R.

    2013-12-01

    Deep uncertainty refers to situations in which stakeholders cannot agree on the full suite of risks for their system or their probabilities. Additionally, systems are often managed for multiple, conflicting objectives such as minimizing cost, maximizing environmental quality, and maximizing hydropower revenues. Many-objective analysis (MOA) uses a quantitative model combined with evolutionary optimization to provide a tradeoff set of potential solutions to a planning problem. However, MOA is often performed using a single, fixed problem conceptualization. Focus on the development of a single formulation can introduce an "inertia" into the problem solution, such that issues outside the initial formulation are less likely to ever be addressed. This study uses the Iterative Closed Question Methodology (ICQM) to continuously reframe the optimization problem, providing iterative definition and reflection for stakeholders. By using a series of directed questions to look beyond a problem's existing modeling representation, ICQM seeks to provide a working environment within which it is easy to modify the motivating question, assumptions, and model identification in optimization problems. The new approach helps identify and reduce bottlenecks, introduced by properties of both the simulation model and the optimization approach, that reduce flexibility in the generation and evaluation of alternatives. It can therefore help introduce new perspectives on the resolution of conflicts between objectives. The Lower Rio Grande Valley portfolio planning problem is used as a case study.
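
    The tradeoff set that MOA produces is, concretely, the set of non-dominated solutions. A minimal Pareto filter (assuming every objective is minimized; the names are illustrative) can be sketched as:

    ```python
    def dominates(q, p):
        """q dominates p if q is no worse in every objective and strictly
        better in at least one (all objectives minimized)."""
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))

    def pareto_front(points):
        """Keep only the non-dominated points, preserving input order."""
        return [p for p in points if not any(dominates(q, p) for q in points)]
    ```

    Evolutionary many-objective algorithms approximate exactly this set over a continuous design space; the filter above is the discrete-sample version.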

  18. An analytic cosmology solution of Poincaré gauge gravity

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Chee, Guoying

    2016-06-01

    A cosmology of Poincaré gauge theory is developed, and an analytic solution is obtained. The calculated results agree with observational data and can be compared with the ΛCDM model. The cosmological constant puzzle, comprising the coincidence and fine-tuning problems, is solved naturally at the same time. The cosmological constant turns out to be the intrinsic torsion and curvature of the vacuum universe, and is derived from the theory naturally rather than added artificially. The dark energy originates from geometry; it includes the cosmological constant but differs from it. Analytic expressions for the equation of state of the dark energy and for the density parameters of the matter and the geometric dark energy are derived. The full equations of linear cosmological perturbations and their solutions are obtained.

  19. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics against system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.

  20. Modeling crowdsourcing as collective problem solving

    NASA Astrophysics Data System (ADS)

    Guazzini, Andrea; Vilone, Daniele; Donati, Camillo; Nardi, Annalisa; Levnajić, Zoran

    2015-11-01

    Crowdsourcing is a process of accumulating ideas, thoughts, or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow a massive number of subjects to be involved in a more or less spontaneous way. Still, the full potential of crowdsourcing is yet to be reached. We introduce a modeling framework through which we study the effectiveness of crowdsourcing in relation to the level of collectivism in facing the problem. Our findings reveal an intricate relationship between the number of participants and the difficulty of the problem, indicating an optimal size of the crowdsourced group. We discuss our results in the context of modern utilization of crowdsourcing.

  1. Multi-dimensional Fokker-Planck equation analysis using the modified finite element method

    NASA Astrophysics Data System (ADS)

    Náprstek, J.; Král, R.

    2016-09-01

    The Fokker-Planck equation (FPE) is a frequently used tool for obtaining the cross probability density function (PDF) of the response of a dynamic system excited by a vector of random processes. FEM represents a very effective solution possibility, particularly when transition processes are investigated or a more detailed solution is needed. Existing papers deal with single degree of freedom (SDOF) systems only, so the respective FPE includes only two independent space variables. Stepping beyond this limit to MDOF systems, a number of specific problems related to true multi-dimensionality must be overcome. Unlike earlier studies, multi-dimensional simplex elements in arbitrary dimension should be deployed and rectangular (multi-brick) elements abandoned. Simple closed formulae for integration over a multi-dimensional domain have been derived. Another specific problem is the generation of a multi-dimensional finite element mesh. Assembly of the global system matrices requires newly composed algorithms owing to the multi-dimensionality. The system matrices are quite full, so no advantage can be taken of the sparsity commonly exploited in conventional 2D/3D FEM applications. After verification of the partial algorithms, an illustrative example dealing with a 2DOF non-linear aeroelastic system under combined random and deterministic excitation is discussed.
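
    The closed integration formulae mentioned above are not reproduced in the abstract; a minimal sketch of the classical result they presumably build on, the exact integral of a monomial in barycentric coordinates over an n-simplex, is:

```python
from math import factorial

def simplex_monomial_integral(volume, exponents):
    """Exact integral of prod_i lambda_i**a_i over an n-simplex of the
    given volume, where lambda_i are the n+1 barycentric coordinates:
    V * n! * prod(a_i!) / (n + sum(a_i))!  (valid in any dimension)."""
    n = len(exponents) - 1          # simplex dimension
    num = factorial(n)
    for a in exponents:
        num *= factorial(a)
    return volume * num / factorial(n + sum(exponents))
```

    For example, on an interval (a 1-simplex) the integral of a single barycentric coordinate is half the length, and on a triangle it is one third of the area; the formula reproduces both and extends to any dimension.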

  2. Partial branch and bound algorithm for improved data association in multiframe processing

    NASA Astrophysics Data System (ADS)

    Poore, Aubrey B.; Yan, Xin

    1999-07-01

    A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have been shown to yield near-optimal answers in real time, and the need to improve the quality of these solutions warrants continuing interest in such methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound are needed to improve solution quality. Methods such as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique is developed, along with appropriate branching and ordering rules. Lagrangian relaxation is used both as a branching method and to calculate lower bounds for subproblems. The results show that the branch-and-bound framework greatly improves the solution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.
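
    The paper's partial branch-and-bound is specialized to multiframe data association; as a generic illustration of the branching, ordering, and bounding ingredients it builds on, here is a sketch of depth-first branch-and-bound for the simpler 2-D assignment problem, with a row-minima bound standing in for the Lagrangian relaxation bound used in the paper:

```python
def assignment_branch_and_bound(cost):
    """Depth-first branch-and-bound for the 2-D assignment problem.
    Rows are assigned in order; columns are tried cheapest-first
    (ordering rule), and a node is pruned when its partial cost plus
    an admissible lower bound (cheapest still-available column for
    every remaining row) cannot beat the incumbent."""
    n = len(cost)
    best = [float("inf"), None]

    def lower_bound(row, used, partial):
        lb = partial
        for r in range(row, n):
            lb += min(cost[r][c] for c in range(n) if c not in used)
        return lb

    def branch(row, used, partial, assign):
        if row == n:
            if partial < best[0]:
                best[0], best[1] = partial, assign[:]
            return
        if lower_bound(row, used, partial) >= best[0]:
            return                              # prune this subtree
        for c in sorted(range(n), key=lambda c: cost[row][c]):
            if c in used:
                continue
            assign.append(c)
            branch(row + 1, used | {c}, partial + cost[row][c], assign)
            assign.pop()

    branch(0, frozenset(), 0, [])
    return best[0], best[1]
```

    A K-best or partial variant would simply stop branching below a fixed depth or node budget, trading optimality guarantees for running time.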

  3. Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes.

    PubMed

    Jacobson, Mark Z; Delucchi, Mark A; Cameron, Mary A; Frew, Bethany A

    2015-12-08

    This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050-2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide.

  4. Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes

    PubMed Central

    Jacobson, Mark Z.; Delucchi, Mark A.; Cameron, Mary A.; Frew, Bethany A.

    2015-01-01

    This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050–2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide. PMID:26598655

  5. FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation

    NASA Astrophysics Data System (ADS)

    Veltri, M.

    2016-09-01

    This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution may be required to identify areas with potential for fatigue damage initiation. Early detection of fatigue-critical areas can drive a simplification of the problem size, leading to a considerable improvement in solution time and model handling while allowing the critical areas to be processed in greater detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow a quick and efficient comparison between methods, outlining potential application benefits and boundaries.

  6. Millimetre-Wave Backhaul for 5G Networks: Challenges and Solutions.

    PubMed

    Feng, Wei; Li, Yong; Jin, Depeng; Su, Li; Chen, Sheng

    2016-06-16

    The trend for dense deployment in future 5G mobile communication networks makes current wired backhaul infeasible owing to the high cost. Millimetre-wave (mm-wave) communication, a promising technique with the capability of providing a multi-gigabit transmission rate, offers a flexible and cost-effective candidate for 5G backhauling. By exploiting highly directional antennas, it becomes practical to cope with explosive traffic demands and to deal with interference problems. Several advancements in physical layer technology, such as hybrid beamforming and full duplexing, bring new challenges and opportunities for mm-wave backhaul. This article introduces a design framework for 5G mm-wave backhaul, including routing, spatial reuse scheduling and physical layer techniques. The associated optimization model, open problems and potential solutions are discussed to fully exploit the throughput gain of the backhaul network. Extensive simulations are conducted to verify the potential benefits of the proposed method for the 5G mm-wave backhaul design.

  7. Hadamard States for the Linearized Yang-Mills Equation on Curved Spacetime

    NASA Astrophysics Data System (ADS)

    Gérard, C.; Wrochna, M.

    2015-07-01

    We construct Hadamard states for the Yang-Mills equation linearized around a smooth, space-compact background solution. We assume the spacetime is globally hyperbolic and its Cauchy surface is compact or equal . We first consider the case when the spacetime is ultra-static, but the background solution depends on time. By methods of pseudodifferential calculus we construct a parametrix for the associated vectorial Klein-Gordon equation. We then obtain Hadamard two-point functions in the gauge theory, acting on Cauchy data. A key role is played by classes of pseudodifferential operators that contain microlocal or spectral type low-energy cutoffs. The general problem is reduced to the ultra-static spacetime case using an extension of the deformation argument of Fulling, Narcowich and Wald. As an aside, we derive a correspondence between Hadamard states and parametrices for the Cauchy problem in ordinary quantum field theory.

  8. Contact problem for a composite material with nacre inspired microstructure

    NASA Astrophysics Data System (ADS)

    Berinskii, Igor; Ryvkin, Michael; Aboudi, Jacob

    2017-12-01

    Bi-material composites with nacre-inspired brick-and-mortar microstructures, characterized by stiff elements of one phase with high aspect ratio separated by thin layers of the second phase, are considered. Such a microstructure has been shown to provide an efficient solution to the problem of crack arrest. However, contrary to the case of a homogeneous material, an external pressure applied to part of the composite boundary can cause significant tensile stresses, which increase the danger of crack nucleation. The influence of the microstructure parameters on the magnitude of the tensile stresses is investigated by means of the classical Flamant-like problem of an orthotropic half-plane subjected to a normal distributed external loading. Adequate analysis of this problem represents a serious computational task owing to the geometry of the considered layout and the high contrast between the composite constituents. This difficulty is circumvented here by deriving a micro-to-macro analysis in which an analytical solution of an auxiliary elasticity problem, followed by the discrete Fourier transform and the higher-order theory, is employed. As a result, full-scale continuum modeling of both composite constituents, without any simplifying assumptions, is presented. Within the proposed modeling framework, the influence of the aspect ratio of the stiff elements on the overall stress distribution is demonstrated.

  9. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to simultaneous structure/control optimization by attempting to minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization is based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response-to-excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, are considered: full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is introduced through the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results are obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  10. The puzzle of the Krebs citric acid cycle: assembling the pieces of chemically feasible reactions, and opportunism in the design of metabolic pathways during evolution.

    PubMed

    Meléndez-Hevia, E; Waddell, T G; Cascante, M

    1996-09-01

    The evolutionary origin of the Krebs citric acid cycle has long been a model case in the understanding of the origin and evolution of metabolic pathways: How can the emergence of such a complex pathway be explained? A number of speculative studies have reached the conclusion that the Krebs cycle evolved from pathways for amino acid biosynthesis, but many important questions remain open: Why and how did the full pathway emerge from there? Are other alternative routes for the same purpose possible? Are they better or worse? Have they had any opportunity to be developed during the evolution of cellular metabolism? We have analyzed the Krebs cycle as a problem of chemical design: to oxidize acetate, yielding reduction equivalents to the respiratory chain to make ATP. Our analysis demonstrates that although there are several different chemical solutions to this problem, the design of this metabolic pathway as it occurs in living cells is the best chemical solution: it has the least possible number of steps and the greatest ATP yield. Study of the evolutionary possibilities of each solution, taking the available material to build new pathways, demonstrates that the emergence of the Krebs cycle has been a typical case of opportunism in molecular evolution. Our analysis shows, therefore, that the role of opportunism in evolution has converted a problem with several possible chemical solutions into a single-solution problem, with the actual Krebs cycle demonstrated to be the best possible chemical design. Our results also allow us to derive the rules under which metabolic pathways emerged during the origin of life.

  11. Microwave beam broadening due to turbulent plasma density fluctuations within the limit of the Born approximation and beyond

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Guidi, L.; Holzhauer, E.; Maj, O.; Poli, E.; Snicker, A.; Weber, H.

    2018-07-01

    Plasma turbulence, and edge density fluctuations in particular, can under certain conditions broaden the cross-section of injected microwave beams significantly. This can be a severe problem for applications relying on well-localized deposition of the microwave power, like the control of MHD instabilities. Here we investigate this broadening mechanism as a function of fluctuation level, background density and propagation length in a fusion-relevant scenario using two numerical codes, the full-wave code IPF-FDMC and the novel wave kinetic equation solver WKBeam. The latter treats the effects of fluctuations using a statistical approach, based on an iterative solution of the scattering problem (Born approximation). The full-wave simulations are used to benchmark this approach. The Born approximation is shown to be valid over a large parameter range, including ITER-relevant scenarios.

  12. Lattice Boltzmann for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin

    2017-01-01

    The goal is to increase predictive use of high-fidelity Computational Aero-Acoustics (CAA) capabilities for NASA's next-generation aviation concepts. CFD has been utilized substantially in analysis and design for steady-state (RANS) problems, but computational resources are extremely challenged by high-fidelity unsteady problems (e.g., unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, airframe noise). Novel techniques are needed to reduce the computational resources consumed by current high-fidelity CAA, to enable routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles, and to achieve an order-of-magnitude reduction in wall time to solution.

  13. Storage and treatment of SNF of Alfa class nuclear submarines: current status and problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ignatiev, Sviatoslav; Zabudko, Alexey; Pankratov, Dmitry

    Available in abstract form only. The current status and main problems associated with the storage, defueling, and subsequent treatment of spent nuclear fuel (SNF) from nuclear submarines (NS) with heavy liquid metal cooled reactors are considered. Ultimately, the proposed solutions could be realized as separate projects funded through national, bilateral, and multilateral sources in the framework of the international collaboration with the Russian Federation on the complex utilization of NS and the rehabilitation of contaminated sites in the North-West region of Russia. (authors)

  14. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use because of the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through the incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn adaptive penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable-temperature variable-field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The resulting fits are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables, which is costly to search efficiently. The hybrid GA is shown to improve the probability of detecting the global optimum and provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality of and confidence in the final solution, and can be applied to other complex systems such as the fitting of other spectroscopic or kinetic data.
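
    The adaptive penalty scheme itself is not specified in the abstract; the following sketch shows the general idea under an assumed rule (grow the penalty coefficient while most of the population is infeasible, relax it otherwise), applied to a toy constrained minimization rather than VTVH MCD fitting:

```python
import random

def ga_adaptive_penalty(f, g, bounds, pop=60, gens=120, seed=1):
    """Minimise f(x) subject to g(x) <= 0 with a real-coded GA whose
    penalty coefficient adapts to the infeasible fraction of the
    population (a simplified stand-in for the paper's scheme).
    Returns the best feasible (value, point) seen."""
    rnd = random.Random(seed)
    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    P = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    penalty = 1.0
    best = (float("inf"), None)
    for _ in range(gens):
        viol = [max(0.0, g(x)) for x in P]
        # adapt the penalty: grow while many individuals are infeasible
        infeas = sum(v > 0 for v in viol) / pop
        penalty = penalty * 1.5 if infeas > 0.5 else max(1.0, penalty * 0.9)
        fit = [f(x) + penalty * v for x, v in zip(P, viol)]
        for x, v in zip(P, viol):
            if v == 0.0 and f(x) < best[0]:
                best = (f(x), x[:])
        # binary tournament selection, blend crossover, gaussian mutation
        Q = []
        while len(Q) < pop:
            i, j = rnd.randrange(pop), rnd.randrange(pop)
            p1 = P[i] if fit[i] < fit[j] else P[j]
            i, j = rnd.randrange(pop), rnd.randrange(pop)
            p2 = P[i] if fit[i] < fit[j] else P[j]
            w = rnd.random()
            child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]
            child = [c + rnd.gauss(0.0, 0.05) for c in child]
            Q.append(clip(child))
        P = Q
    return best
```

    On the toy problem min x^2 + y^2 subject to x + y >= 1 the sketch converges toward the constrained optimum at (0.5, 0.5); the adaptive rule keeps the search near the constraint boundary without hand-tuning the penalty.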

  15. Uncertainty quantification for complex systems with very high dimensional response using Grassmann manifold variations

    NASA Astrophysics Data System (ADS)

    Giovanis, D. G.; Shields, M. D.

    2018-07-01

    This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
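
    The pairwise distances on the Grassmann manifold can be computed from the principal angles between subspaces; a minimal sketch of one common choice, the geodesic distance (the paper's exact metric is not given in the abstract), is:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance on the Grassmann manifold between the column
    spans of A and B: the 2-norm of the vector of principal angles,
    read off from the singular values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(np.asarray(A, dtype=float))
    Qb, _ = np.linalg.qr(np.asarray(B, dtype=float))
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return float(np.linalg.norm(angles))
```

    Identical subspaces give distance zero regardless of basis, while orthogonal one-dimensional subspaces are pi/2 apart, which makes the metric a natural refinement criterion for the simplex elements described above.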

  16. Methods in the study of discrete upper hybrid waves

    NASA Astrophysics Data System (ADS)

    Yoon, P. H.; Ye, S.; Labelle, J.; Weatherwax, A. T.; Menietti, J. D.

    2007-11-01

    Naturally occurring plasma waves characterized by fine frequency structure or discrete spectrum, detected by satellite, rocket-borne instruments, or ground-based receivers, can be interpreted as eigenmodes excited and trapped in field-aligned density structures. This paper overviews various theoretical methods to study such phenomena for a one-dimensional (1-D) density structure. Among the various methods are parabolic approximation, eikonal matching, eigenfunction matching, and full numerical solution based upon shooting method. Various approaches are compared against the full numerical solution. Among the analytic methods it is found that the eigenfunction matching technique best approximates the actual numerical solution. The analysis is further extended to 2-D geometry. A detailed comparative analysis between the eigenfunction matching and fully numerical methods is carried out for the 2-D case. Although in general the two methods compare favorably, significant differences are also found such that for application to actual observations it is prudent to employ the fully numerical method. Application of the methods developed in the present paper to actual geophysical problems will be given in a companion paper.
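
    As an illustration of the shooting method used for the full numerical solution, the following sketch finds a trapped eigenvalue of a generic 1-D problem u'' = (V(x) - E) u with u vanishing at both ends, by bisecting on E (a textbook setup, not the authors' upper-hybrid model):

```python
import math

def shoot(E, V, a, b, n=2000):
    """Integrate u'' = (V(x) - E) u from x=a with u(a)=0, u'(a)=1
    using classical RK4 and return u(b)."""
    def rhs(x, u, v):
        return v, (V(x) - E) * u
    h = (b - a) / n
    u, v, x = 0.0, 1.0, a
    for _ in range(n):
        k1u, k1v = rhs(x, u, v)
        k2u, k2v = rhs(x + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = rhs(x + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = rhs(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return u

def eigenvalue(V, a, b, Elo, Ehi, tol=1e-10):
    """Bisect on E until the far boundary condition u(b)=0 is met;
    Elo and Ehi must bracket a sign change of u(b)."""
    flo = shoot(Elo, V, a, b)
    while Ehi - Elo > tol:
        Emid = 0.5 * (Elo + Ehi)
        fmid = shoot(Emid, V, a, b)
        if flo * fmid <= 0.0:
            Ehi = Emid
        else:
            Elo, flo = Emid, fmid
    return 0.5 * (Elo + Ehi)
```

    With V = 0 on [0, pi] this recovers the lowest trapped mode E = 1 (u = sin x); for a density-structure problem, V would encode the profile of the duct.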

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  18. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2014-10-01

    Progress on the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for the simulation of MHD instabilities with low, intermediate, and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse-iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.

  19. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
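
    For contrast with the exact and statistical-mechanics treatments discussed above, the classical maximal-matching heuristic shows why VC admits a simple polynomial-time 2-approximation even though solving it exactly is NP-hard:

```python
def vertex_cover_2approx(edges):
    """Maximal-matching heuristic: scan the edges and, whenever an edge
    is still uncovered, add BOTH of its endpoints to the cover. The
    matched edges are vertex-disjoint, so any optimal cover must use at
    least one endpoint per matched edge; the result is therefore at most
    twice the optimal cover size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

    On a star graph the heuristic returns two vertices where one (the hub) would suffice, illustrating the factor-2 gap; closing that gap in the typical case is exactly where the easy-hard transitions studied in the paper become relevant.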

  20. Cascade flutter analysis with transient response aerodynamics

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Mahajan, Aparajit J.; Keith, Theo G., Jr.; Stefko, George L.

    1991-01-01

    Two methods for calculating linear frequency-domain aerodynamic coefficients from a time-marching full potential cascade solver are developed and verified. In the first method, the Influence Coefficient method, solutions to elemental problems are superposed to obtain the solutions for a cascade in which all blades are vibrating with a constant interblade phase angle. The elemental problem consists of a single blade in the cascade oscillating while the other blades remain stationary. In the second method, the Pulse Response method, the response to the transient motion of a blade is used to calculate influence coefficients; this is done by calculating the Fourier transforms of the blade motion and of the response. Both methods are validated by comparison with the Harmonic Oscillation method and give accurate results. The aerodynamic coefficients obtained from these methods are used for frequency-domain flutter calculations involving a typical-section blade structural model. An eigenvalue problem is solved for each interblade phase angle mode, and the eigenvalues are used to determine aeroelastic stability. Flutter calculations are performed for two examples over a range of subsonic Mach numbers.
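
    The Pulse Response idea, dividing the Fourier transform of the transient response by that of the input motion, can be sketched on a generic discrete linear system (a first-order recursion standing in for the full potential solver, which is an assumption for illustration only):

```python
import numpy as np

def transfer_from_pulse(x, y):
    """Frequency-domain coefficients as the ratio of the Fourier
    transforms of the transient response y and the input pulse x."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    return Y / X

# Toy stand-in for the time-marching solver: the first-order recursion
# y[n] = a*y[n-1] + x[n], whose exact transfer function is
# H(w) = 1 / (1 - a*exp(-1j*w)).
a, N = 0.5, 256
x = np.zeros(N)
x[0] = 1.0                              # unit pulse input
y = np.zeros(N)
for nstep in range(N):
    y[nstep] = (a * y[nstep - 1] if nstep else 0.0) + x[nstep]

H = transfer_from_pulse(x, y)
w = 2 * np.pi * np.arange(N // 2 + 1) / N
H_exact = 1.0 / (1.0 - a * np.exp(-1j * w))
```

    A single transient run thus yields coefficients at all resolved frequencies at once, which is the efficiency advantage of the Pulse Response method over repeating harmonic-oscillation runs frequency by frequency.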

  1. Optimization in Radiation Therapy: Applications in Brachytherapy and Intensity Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    McGeachy, Philip David

    Over 50% of cancer patients require radiation therapy (RT). RT is an optimization problem requiring maximization of the radiation damage to the tumor while minimizing the harm to healthy tissues. This dissertation focuses on two main RT optimization problems: 1) brachytherapy and 2) intensity modulated radiation therapy (IMRT). The brachytherapy research involved solving a non-convex optimization problem by creating an open-source genetic algorithm optimizer to determine the optimal radioactive seed distribution for a given set of patient volumes and constraints, both dosimetric and implant-based. The optimizer was tested on a set of 45 prostate brachytherapy patients. While all solutions met the clinical standards, they also benchmarked favorably against those generated by a standard commercial solver. Compared to its commercial counterpart, the salient features of the generated solutions were slightly reduced prostate coverage, lower dose to the urethra and rectum, and a smaller number of needles required for an implant. Historically, IMRT has required modulation of fluence while keeping the photon beam energy fixed. The IMRT-related investigation in this thesis aimed at broadening the solution space by varying photon energy; the problem therefore involved simultaneous optimization of photon beamlet energy and fluence, denoted XMRT. The problem was formulated as convex, and linear programming was applied to obtain solutions for optimal energy-dependent fluences while achieving all imposed clinical objectives and constraints. The dosimetric advantages of XMRT over single-energy IMRT in the improved sparing of organs at risk (OARs) were demonstrated in simplified phantom studies. The XMRT algorithm was then extended to include clinical dose-volume constraints, and clinical studies for prostate and head and neck cancer patients were investigated.
Compared to IMRT, XMRT provided a dosimetric benefit in the prostate case, particularly within intermediate- to low-dose regions (≤ 40 Gy) for OARs. For the head and neck cases, XMRT solutions showed no significant advantage or disadvantage over IMRT. Deliverability concerns for the fluence maps generated by XMRT were addressed by incorporating smoothing constraints during the optimization and by successfully generating treatment machine files. Further research is needed to explore the full potential of the XMRT approach to RT.

  2. Asymptotic traveling wave solution for a credit rating migration problem

    NASA Astrophysics Data System (ADS)

    Liang, Jin; Wu, Yuan; Hu, Bei

    2016-07-01

In this paper, an asymptotic traveling wave solution of a free boundary model for pricing a corporate bond with credit rating migration risk is studied. This is the first study to associate an asymptotic traveling wave solution with the credit rating migration problem. The pricing problem with credit rating migration risk is modeled as a free boundary problem. The existence, uniqueness and regularity of the solution are obtained. Under certain conditions, we prove that the solution of the credit rating problem converges to a traveling wave solution, which has an explicit form. Furthermore, numerical examples are presented.

  3. A protect solution for data security in mobile cloud storage

    NASA Astrophysics Data System (ADS)

    Yu, Xiaojun; Wen, Qiaoyan

    2013-03-01

Accessing cloud storage from mobile devices is popular. However, this application suffers from data security risks, especially data leakage and privacy violations. The risk exists not only in the cloud storage system but also in the mobile client platform. To reduce the security risk, this paper proposes a new security solution that makes full use of searchable encryption and trusted computing technology. Given the performance limits of mobile devices, it proposes a trusted-proxy-based protection architecture. The basic design idea, deployment model and key flows are detailed. Analysis of security and performance shows the advantages of the solution.

  4. Integrated Analytic and Linearized Inverse Kinematics for Precise Full Body Interactions

    NASA Astrophysics Data System (ADS)

    Boulic, Ronan; Raunhardt, Daniel

Despite the large success of games grounded in movement-based interactions, the current state of full body motion capture technologies still prevents the exploitation of precise interactions with complex environments. This paper focuses on ensuring a precise spatial correspondence between the user and the avatar. We build upon our past effort in human postural control with a Prioritized Inverse Kinematics framework. One of its key advantages is that it eases the dynamic combination of postural and collision avoidance constraints. However, its reliance on a linearized approximation of the problem makes it vulnerable to the well-known full-extension singularity of the limbs. In such a context the tracking performance is reduced and/or less believable intermediate postural solutions are produced. We address this issue by introducing a new type of analytic constraint that smoothly integrates with the Prioritized Inverse Kinematics framework. The paper first recalls the background of full body 3D interactions and the advantages and drawbacks of the linearized IK solution. Then the Flexion-EXTension constraint (FLEXT in short) is introduced for the partial position control of limb-like articulated structures. Comparative results illustrate the interest of this new type of integrated analytical and linearized IK control.

  5. Boundary-Layer Stability Analysis of the Mean Flows Obtained Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liao, Wei; Malik, Mujeeb R.; Lee-Rausch, Elizabeth M.; Li, Fei; Nielsen, Eric J.; Buning, Pieter G.; Chang, Chau-Lyan; Choudhari, Meelan M.

    2012-01-01

Boundary-layer stability analyses of mean flows extracted from unstructured-grid Navier-Stokes solutions have been performed. A procedure has been developed to extract mean flow profiles from the FUN3D unstructured-grid solutions. Extensive code-to-code validations have been performed by comparing the extracted mean flows as well as the corresponding stability characteristics to the predictions based on structured-grid solutions. Comparisons are made on a range of problems from a simple flat plate to a full aircraft configuration: a modified Gulfstream-III with a natural laminar flow glove. The future aim of the project is to extend the adjoint-based design capability in FUN3D to include natural laminar flow and laminar flow control by integrating it with boundary-layer stability analysis codes, such as LASTRAC.

  6. Global strong solutions to the one-dimensional heat-conductive model for planar non-resistive magnetohydrodynamics with large data

    NASA Astrophysics Data System (ADS)

    Li, Yang

    2018-06-01

In this paper, we consider the initial-boundary value problem for the one-dimensional compressible heat-conductive model for planar non-resistive magnetohydrodynamics. By making full use of the effective viscous flux and an analogue, together with the structure of the equations, global existence and uniqueness of strong solutions are obtained on the condition that the initial density is bounded below away from vacuum and the heat conductivity coefficient κ satisfies the growth condition κ₁(1 + θ^α) ≤ κ(θ) ≤ κ₂(1 + θ^α) for some 0 < α < ∞, with κ₁, κ₂ being positive constants. Moreover, global solvability of strong solutions is shown when the initial data contain vacuum. The results are obtained without any smallness restriction on the initial data.

  7. Exact solution of large asymmetric traveling salesman problems.

    PubMed

    Miller, D L; Pekny, J F

    1991-02-15

    The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
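As a small concrete illustration of exact solution of the asymmetric TSP, the classic Held-Karp dynamic program handles asymmetric costs directly; this minimal Python sketch is a textbook method practical only for small city counts, not the authors' algorithm, which scales to far larger instances:

```python
from itertools import combinations

def held_karp(dist):
    """Exact asymmetric TSP cost via Held-Karp dynamic programming.
    dist[i][j] is the (possibly asymmetric) travel cost from city i to j."""
    n = len(dist)
    # dp[(mask, j)] = cheapest cost to start at city 0, visit exactly the
    # cities in bitmask `mask` (which excludes city 0), and end at city j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for j in subset:
                mask |= 1 << j
            for j in subset:
                prev = mask ^ (1 << j)  # same subset without city j
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2  # all cities except city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The table has O(n·2ⁿ) entries, which is why provably optimal solutions for large instances require the stronger bounding techniques surveyed in the article.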

  8. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    NASA Astrophysics Data System (ADS)

    van Doren, Thomas Walter

    1993-01-01

This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure-release. These numerical solutions are compared to finite-difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.

  9. Optical reflection from planetary surfaces as an operator-eigenvalue problem

    USGS Publications Warehouse

    Wildey, R.L.

    1986-01-01

The understanding of quantum mechanical phenomena has come to rely heavily on theory framed in terms of operators and their eigenvalue equations. This paper investigates the utility of that technique as related to the reciprocity principle in diffuse reflection. The reciprocity operator is shown to be unitary and Hermitian; hence, its eigenvectors form a complete orthonormal basis. The relevant eigenvalue is found to be infinitely degenerate. A superposition of the eigenfunctions found from solution by separation of variables is inadequate to form a general solution that can be fitted to a one-dimensional boundary condition, because the difficulty of resolving the reciprocity operator into a superposition of independent one-dimensional operators has yet to be overcome. A particular lunar application in the form of a failed prediction of limb-darkening of the full Moon from brightness versus phase illustrates this problem. A general solution is derived which fully exploits the determinative powers of the reciprocity operator as an unresolved two-dimensional operator. However, a solution based on a sum of one-dimensional operators, if possible, would be much more powerful. A close association is found between the reciprocity operator and the particle-exchange operator of quantum mechanics, which may indicate the direction for further successful exploitation of the approach based on the operational calculus. © 1986 D. Reidel Publishing Company.

  10. Fast online inverse scattering with Reduced Basis Method (RBM) for a 3D phase grating with specific line roughness

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.; Kurz, Julian; Hetzler, Jochen; Pomplun, Jan; Burger, Sven; Zschiedrich, Lin; Schmidt, Frank

    2011-05-01

Finite element methods (FEM) for the rigorous electromagnetic solution of Maxwell's equations are known to be very accurate. They possess a high convergence rate for the determination of near-field and far-field quantities of scattering and diffraction processes of light with structures having feature sizes in the range of the light wavelength. We are using FEM software for 3D scatterometric diffraction calculations, allowing the application of a brilliant and extremely fast solution method: the reduced basis method (RBM). The RBM constructs a reduced model of the scattering problem from precalculated snapshot solutions, guided self-adaptively by an error estimator. Using RBM, we achieve an accuracy of about 10^-4 relative to the direct problem with a reduced basis dimension of only 35 precalculated snapshots. This speeds up the calculation of diffraction amplitudes by a factor of about 1000 compared to the conventional solution of Maxwell's equations by FEM. This allows us to reconstruct the three geometrical parameters of our phase grating from "measured" scattering data in a 3D parameter manifold online, within a minute, with the full FEM accuracy available. Additionally, a sensitivity analysis or the choice of robust measuring strategies, for example, can be done online in a few minutes.

  11. Ranked solutions to a class of combinatorial optimizations - with applications in mass spectrometry based peptide sequencing

    NASA Astrophysics Data System (ADS)

    Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo

    2006-03-01

Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions under the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth-best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed-paths-in-random-media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.

  12. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations, which is solved with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
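The core optimization step, a gradient scaled by the diagonal of the Hessian, can be illustrated on a toy linear least-squares misfit f(m) = ½‖Gm − d‖². This pure-Python sketch uses small dense matrices and a fixed step length, simplifying assumptions far from the distributed FWT2D implementation:

```python
def scaled_gradient_descent(G, d, m, steps=200, alpha=1.0):
    """Gradient descent on f(m) = 0.5*||G m - d||^2, with each gradient
    component divided by the corresponding diagonal entry of the Hessian
    G^T G. The diagonal acts as a cheap preconditioner/scaling."""
    rows, cols = len(G), len(G[0])
    # diagonal of the Hessian G^T G: sum of squared column entries
    diag_h = [sum(G[i][j] ** 2 for i in range(rows)) for j in range(cols)]
    for _ in range(steps):
        # residual r = G m - d, gradient g = G^T r
        r = [sum(G[i][j] * m[j] for j in range(cols)) - d[i] for i in range(rows)]
        g = [sum(G[i][j] * r[i] for i in range(rows)) for j in range(cols)]
        m = [m[j] - alpha * g[j] / diag_h[j] for j in range(cols)]
    return m
```

The scaling equalizes convergence rates across parameters whose columns of G have very different magnitudes, which is exactly the role the diagonal Hessian plays in the full-waveform setting.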

  13. The use of multigrid techniques in the solution of the Elrod algorithm for a dynamically loaded journal bearing. M.S. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.

    1988-01-01

A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed, utilizing a multigrid iterative technique. The code is compared with a presently existing direct solution in terms of computational time and accuracy. The model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via liquid striations. The mixed nature of the equations (elliptic in the full film zone and nonelliptic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  14. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is based on assuming full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.

  15. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. 
The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.

  16. Finite difference methods for the solution of unsteady potential flows

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.

    1985-01-01

A brief review is presented of various problems which are confronted in the development of an unsteady finite difference potential code. This review is conducted mainly in the context of typical small-disturbance and full-potential methods. The issues discussed include the choice of equation, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including an unsteady three-dimensional rotor calculation, are demonstrated.

  17. Curriculum Adjustments in Information Studies Training Programmes in Africa. Proceedings of the Post-IFLA Conference Seminar (Bonn, West Germany, August 24-28, 1987).

    ERIC Educational Resources Information Center

    Bock, Gunter, Ed.; Huttemann, Lutz, Ed.

    Recommendations formulated and adopted by the participants in this seminar and summaries of discussions at the various sessions are followed by the full text of 17 papers presented at the meeting: (1) "The 'BID' Model Test of the Polytechnic of Hanover: Principles, Structures, Solutions and Problems of New Studies in the Field of…

  18. Penumbral lunar eclipse of September 16, 2016: observing with sunglasses to make it popular

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    2016-08-01

The observation of a penumbral lunar eclipse is usually missed for lack of interest. The real problem is the difficulty of observing it: the full Moon is so bright that the eye's response is easily saturated, making the penumbral limit hard to detect. The solution of using sunglasses, even two or three stacked, can make this observation very popular.

  19. A review of spectral methods

    NASA Technical Reports Server (NTRS)

    Lustman, L.

    1984-01-01

An outline of spectral methods for partial differential equations is presented. The basic spectral algorithm is defined, collocation methods are emphasized, and the main advantage of the method, its infinite order of accuracy for problems with smooth solutions, is discussed. Examples of theoretical numerical analysis of spectral calculations are presented. An application of spectral methods to transonic flow is presented; the full-potential transonic equation is among the best understood nonlinear equations.

  20. Identifying the stored energy of a hyperelastic structure by using an attenuated Landweber method

    NASA Astrophysics Data System (ADS)

    Seydel, Julia; Schuster, Thomas

    2017-12-01

    We consider the nonlinear inverse problem of identifying the stored energy function of a hyperelastic material from full knowledge of the displacement field as well as from surface sensor measurements. The displacement field is represented as a solution of Cauchy’s equation of motion, which is a nonlinear elastic wave equation. Hyperelasticity means that the first Piola-Kirchhoff stress tensor is given as the gradient of the stored energy function. We assume that a dictionary of suitable functions is available. The aim is to recover the stored energy with respect to this dictionary. The considered inverse problem is of vital interest for the development of structural health monitoring systems which are constructed to detect defects in elastic materials from boundary measurements of the displacement field, since the stored energy encodes the mechanical properties of the underlying structure. In this article we develop a numerical solver using the attenuated Landweber method. We show that the parameter-to-solution map satisfies the local tangential cone condition. This result can be used to prove local convergence of the attenuated Landweber method in the case that the full displacement field is measured. In our numerical experiments we demonstrate how to construct an appropriate dictionary and show that our method is well suited to localize damages in various situations.
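For a linear model A x = y, the Landweber iteration underlying the method reads x ← x + ω Aᵀ(y − A x). The following minimal sketch covers only this plain linear special case; the article applies an attenuated variant to a nonlinear parameter-to-solution map:

```python
def landweber(A, y, x, omega=0.1, iters=500):
    """Plain Landweber iteration x <- x + omega * A^T (y - A x) for the
    linear model A x = y. The relaxation parameter omega must satisfy
    0 < omega < 2 / ||A||^2 for convergence."""
    rows, cols = len(A), len(A[0])
    for _ in range(iters):
        # residual r = y - A x
        r = [y[i] - sum(A[i][j] * x[j] for j in range(cols)) for i in range(rows)]
        # gradient-type update along A^T r
        x = [x[j] + omega * sum(A[i][j] * r[i] for i in range(rows))
             for j in range(cols)]
    return x
```

For ill-posed problems the iteration count itself acts as a regularization parameter, which is why Landweber-type schemes are popular for inverse problems such as the stored-energy identification above.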

  1. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

This paper proposes a new location problem for competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and for finding its solution three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying-level maximizing problem. After showing that an optimal solution of each can be found by solving 0-1 programming problems, a solution method is proposed based on a tabu search algorithm improved with strategic vibration. Efficiency of the solution method is shown by applying it to numerical examples of the facility location problems.
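The tabu search metaheuristic mentioned above can be sketched generically on a toy 0-1 problem: take the best non-tabu single-bit flip each iteration, with a short-term memory forbidding recently flipped bits. This is only the basic scheme, not the strategic-vibration variant developed in the paper:

```python
import random

def tabu_search(cost, length, iters=200, tenure=7, seed=1):
    """Basic tabu search over bit-strings minimizing cost(bits)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(length)]
    best, best_cost = x[:], cost(x)
    tabu = {}  # bit index -> first iteration at which flipping it is allowed again
    for it in range(iters):
        candidates = []
        for j in range(length):
            y = x[:]
            y[j] ^= 1
            c = cost(y)
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get(j, -1) <= it or c < best_cost:
                candidates.append((c, j, y))
        c, j, x = min(candidates)  # accept the best move, even if worsening
        tabu[j] = it + tenure      # forbid reversing this flip for a while
        if c < best_cost:
            best, best_cost = x[:], c
    return best
```

Accepting the best neighbor even when it worsens the current solution, while the tabu list blocks immediate backtracking, is what lets the search escape local optima that trap plain hill climbing.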

  2. Introducing the MCHF/OVRP/SDMP: Multicapacitated/Heterogeneous Fleet/Open Vehicle Routing Problems with Split Deliveries and Multiproducts

    PubMed Central

    Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk

    2014-01-01

In this paper, we analyze a real-world OVRP for a production company. Considering real-world constraints, we classify our problem as a multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproducts (MCHF/OVRP/SDMP), which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems of different sizes (10–90 customers) considering real-world parameters. Although MIP is able to find optimal solutions of small problems (10 customers), when the number of customers increases the problem gets harder to solve, and thus MIP could not find optimal solutions for problems that contain more than 10 customers. Moreover, MIP fails to find any feasible solution of large-scale problems (50–90 customers) within the time limit (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA based approach reaches successful solutions with a 9.66% gap in 392.8 s on average, instead of 7200 s, for the problems that contain 10–50 customers. For large-scale problems (50–90 customers), GA reaches feasible solutions within the time limit. In conclusion, for real-world applications, GA is preferable to MIP for reaching feasible solutions in short time periods. PMID:25045735
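The GA ingredients referred to above (a population of candidate solutions evolved by selection, crossover, and mutation) can be sketched generically. This toy bit-string version is illustrative only and is not the authors' routing-specific encoding or operators:

```python
import random

def genetic_minimize(cost, length, pop_size=40, gens=120, seed=0):
    """Minimal genetic algorithm over bit-strings: binary tournament
    selection, one-point crossover, and occasional bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # binary tournament: the cheaper of two random individuals wins
            a, b = rng.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]  # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(length)] ^= 1  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

For instance, `genetic_minimize(lambda bits: bits.count(0), length=20)` drives the population toward the all-ones string; in a VRP setting the chromosome would instead encode customer-to-route assignments and the cost function would evaluate route feasibility and length.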

  3. An algorithm for generating modular hierarchical neural network classifiers: a step toward larger scale applications

    NASA Astrophysics Data System (ADS)

    Roverso, Davide

    2003-08-01

Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.

  4. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

In Computational Protein Design (CPD), assuming a rigid backbone and amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
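The branch-and-bound component can be illustrated with a toy analogue: binary choices with a per-position cost standing in for rotamer choices under a decomposable energy, and the sum of the per-position minima over the undecided tail serving as an admissible lower bound. This is a sketch of the generic technique only, not the arc-consistency/tree-decomposition machinery of the paper:

```python
def branch_and_bound(values):
    """Minimize sum(values[i][choice_i]) over binary choices, pruning any
    partial assignment whose cost plus an optimistic bound on the remaining
    positions cannot beat the incumbent."""
    n = len(values)
    # tail_min[i] = best possible total cost of positions i..n-1
    tail_min = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        tail_min[i] = tail_min[i + 1] + min(values[i])
    best = [float("inf")]

    def recurse(i, total):
        if total + tail_min[i] >= best[0]:
            return  # bound: this branch cannot improve the incumbent
        if i == n:
            best[0] = total  # new incumbent (strictly better by the bound test)
            return
        for choice in (0, 1):
            recurse(i + 1, total + values[i][choice])

    recurse(0, 0)
    return best[0]
```

In real CPD the "positions" interact through pairwise energy terms, so the lower bound must come from techniques such as arc consistency rather than independent per-position minima, but the prune-against-incumbent structure is the same.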

  5. Low-thrust trajectory optimization in a full ephemeris model

    NASA Astrophysics Data System (ADS)

    Cai, Xing-Shan; Chen, Yang; Li, Jun-Feng

    2014-10-01

Low-thrust trajectory optimization with complicated constraints must be considered in practical engineering. In most of the literature, this problem is simplified to a two-body model in which the spacecraft is subject only to the central body's gravitational force and its own electric propulsion, and the gravity assist (GA) is modeled as an instantaneous velocity increment. This paper presents a method to solve the fuel-optimal problem of low-thrust trajectories with complicated constraints in a full ephemeris model, which is closer to practical engineering conditions. First, it introduces various perturbations, including a third body's gravity, the nonspherical perturbation and the solar radiation pressure, into the dynamic equations. Second, it builds two types of equivalent inner constraints to describe the GA. At the same time, the present paper applies a series of techniques, such as a homotopic approach, to enhance the likelihood of convergence to the globally optimal solution.

  6. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
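    The first linear subproblem above is solved with a weighted regularized linear conjugate gradient method. As an illustrative sketch of that building block only (plain Tikhonov damping on the normal equations; the weighting matrices and the actual EM operators of the dissertation are omitted, and the matrix here is synthetic):

```python
import numpy as np

def regularized_cg(A, d, alpha, n_iter=200, tol=1e-10):
    """Solve the regularized normal equations (A^T A + alpha I) m = A^T d
    with the conjugate gradient method (a plain Tikhonov-damped sketch)."""
    AtA = A.T @ A + alpha * np.eye(A.shape[1])
    b = A.T @ d
    m = np.zeros(A.shape[1])
    r = b - AtA @ m          # residual of the normal equations
    p = r.copy()             # initial search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = AtA @ p
        step = rs / (p @ Ap)
        m += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m

# Synthetic overdetermined linear inverse problem: recover m_true from d = A m_true
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
m_true = rng.standard_normal(10)
d = A @ m_true
m_est = regularized_cg(A, d, alpha=1e-8)
```

    With a small damping parameter and noise-free data, the estimate coincides with the true model to within the regularization bias.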

  7. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types.

    PubMed

    Webb, Margaret E; Little, Daniel R; Cropper, Simon J

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions.

  8. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types

    PubMed Central

    Webb, Margaret E.; Little, Daniel R.; Cropper, Simon J.

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions. PMID:27725805

  9. A full-potential approach to the relativistic single-site Green's function

    DOE PAGES

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...

    2016-07-07

    One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, the single-site scattering itself is also appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Lastly, the code is rigorously tested, and with the help of Krein's theorem the relativistic and full-potential effects in group V elements and noble metals are thoroughly investigated.

  10. A biologically inspired controller to solve the coverage problem in robotics.

    PubMed

    Rañó, Iñaki; Santos, José A

    2017-06-05

    The coverage problem consists of computing a path or trajectory for a robot to pass over all the points in some free area, and has applications ranging from floor cleaning to demining. Coverage is usually solved either as a planning problem, which provides theoretical validation of the solution, or through heuristic techniques that rely on experimental validation. Through a combination of theoretical results and simulations, this paper presents a novel solution to the coverage problem that exploits the chaotic behaviour of a simple biologically inspired motion controller, the Braitenberg vehicle 2b. Although chaos has been used for coverage, our approach makes much less restrictive assumptions about the environment and can be implemented using on-board sensors. First, we prove theoretically that this vehicle, a well-known model of animal tropotaxis, behaves as a charge in an electromagnetic field. The motion equations can be reduced to a Hamiltonian system, and therefore the vehicle follows quasi-periodic or chaotic trajectories which pass arbitrarily close to any point in the workspace, i.e. it solves the coverage problem. Secondly, through a set of extensive simulations, we show that the trajectories cover regions of bounded workspaces, and full coverage is achieved when the perceptual range of the vehicle is short. We compare the performance of this new approach with different types of random motion controllers in the same bounded environments.
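    The controller in question is easy to simulate: a Braitenberg vehicle 2b has crossed excitatory sensor-to-wheel connections, so the wheel farther from the stimulus spins faster and the vehicle turns toward it. A hedged sketch (the unicycle kinematics, sensor geometry, and inverse-square stimulus field are illustrative choices, not the paper's model):

```python
import math

def braitenberg_2b(x, y, theta, steps=2000, dt=0.01,
                   base=1.0, gain=2.0, width=0.2):
    """Simulate a Braitenberg vehicle 2b (crossed excitatory sensor-wheel
    connections) moving in a field emitted by a source at the origin."""
    def stimulus(px, py):
        return 1.0 / (1.0 + px * px + py * py)  # decays with squared distance
    traj = []
    for _ in range(steps):
        # sensor positions at the left/right side of the vehicle
        lx = x + width * math.cos(theta + math.pi / 2)
        ly = y + width * math.sin(theta + math.pi / 2)
        rx = x + width * math.cos(theta - math.pi / 2)
        ry = y + width * math.sin(theta - math.pi / 2)
        # 2b: left sensor drives the right wheel and vice versa
        v_left = base + gain * stimulus(rx, ry)
        v_right = base + gain * stimulus(lx, ly)
        v = 0.5 * (v_left + v_right)            # forward speed
        omega = (v_right - v_left) / (2.0 * width)  # turning rate
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        traj.append((x, y))
    return traj

traj = braitenberg_2b(2.0, 0.0, math.pi / 2)
```

    Plotting such a trajectory shows the orbit-like, space-filling wandering around the source that the paper exploits for coverage.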

  11. The individual time trial as an optimal control problem

    PubMed Central

    de Jong, Jenny; Fokkink, Robbert; Olsder, Geert Jan; Schwab, AL

    2017-01-01

    In a cycling time trial, the rider needs to distribute his power output optimally to minimize the time between start and finish. Mathematically, this is an optimal control problem. Even for a straight and flat course, its solution is non-trivial and involves a singular control, which corresponds to a power that is slightly above the aerobic level. The rider must start at full anaerobic power to reach an optimal speed and maintain that speed for the rest of the course. If the course is flat but not straight, then the speed at which the rider can round the bends becomes crucial. PMID:29388631

  12. Orbital optimisation in the perfect pairing hierarchy: applications to full-valence calculations on linear polyacenes

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2018-03-01

    We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at the perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry-breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at the PP, PQ and perfect hextuples (PH) levels of theory, both in the π subspace only, as well as in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary even when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.

  13. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
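    The sequential recipe above (solve the unconstrained problem, then threshold onto the interval) can be demonstrated on the separable special case of the fused lasso with the TV term switched off (pure lasso with a uniform interval containing zero); this is a hedged, simplified instance, not the paper's general TV result:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def soft_threshold(y, lam):
    """Closed-form solution of the unconstrained lasso denoising problem."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Sequential solution: unconstrained solve, then clip onto [lo, hi] (0 inside)
y = np.array([3.0, -2.5, 0.4, 1.2, -0.1])
lam, lo, hi = 0.5, -1.0, 2.0
x_seq = np.clip(soft_threshold(y, lam), lo, hi)

# Direct constrained solve: the problem is separable, so minimize per component
x_direct = np.array([
    minimize_scalar(lambda x, yi=yi: 0.5 * (x - yi) ** 2 + lam * abs(x),
                    bounds=(lo, hi), method="bounded").x
    for yi in y
])
```

    The clipped unconstrained minimizer and the directly computed constrained minimizer agree componentwise, which is exactly the sequential-solution claim in this special case.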

  14. Co-evolution for Problem Simplification

    NASA Technical Reports Server (NTRS)

    Haith, Gary L.; Lohn, Jason D.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    1999-01-01

    This paper explores a co-evolutionary approach applicable to difficult problems with limited failure/success performance feedback. Like familiar "predator-prey" frameworks, this algorithm evolves two populations of individuals: the solutions (predators) and the problems (prey). The approach extends previous work by rewarding only the problems that match their difficulty to the level of solution competence. In complex problem domains with limited feedback, this "tractability constraint" helps provide an adaptive fitness gradient that effectively differentiates the candidate solutions. The algorithm generates selective pressure toward the evolution of increasingly competent solutions by rewarding solution generality and uniqueness and problem tractability and difficulty. Relative (inverse-fitness) and absolute (static objective function) approaches to evaluating problem difficulty are explored and discussed. On a simple control task, this co-evolutionary algorithm was found to have significant advantages over a genetic algorithm with either a static fitness function or a fitness function that changes on a hand-tuned schedule.
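    A minimal toy version of this predator-prey scheme can be sketched as follows (the scalar "solutions" and "problems", the 20-80% tractability band, and the elitist selection are simplifications of my own, not the paper's algorithm):

```python
import random

random.seed(1)

def select_and_mutate(pop, fit):
    """Elitist selection: keep the fitter half, refill with mutated copies."""
    ranked = [x for _, x in sorted(zip(fit, pop), reverse=True)]
    elite = ranked[: len(pop) // 2]
    children = [max(0, e + random.choice((-1, 0, 1))) for e in elite]
    return elite + children

def coevolve(gens=100, n=20):
    """Toy co-evolution: a scalar 'solution' solves a scalar 'problem' when it
    meets or exceeds the problem's difficulty. Problems are rewarded only when
    their difficulty matches current solution competence (the tractability
    constraint: solved by some, but not all, of the solutions)."""
    solutions = [random.randint(0, 5) for _ in range(n)]
    problems = [random.randint(0, 5) for _ in range(n)]
    history = []
    for _ in range(gens):
        sol_fit = [sum(s >= p for p in problems) for s in solutions]
        def frac_solved(p):
            return sum(s >= p for s in solutions) / len(solutions)
        prob_fit = [p if 0.2 <= frac_solved(p) <= 0.8 else 0 for p in problems]
        solutions = select_and_mutate(solutions, sol_fit)
        problems = select_and_mutate(problems, prob_fit)
        history.append(max(solutions))
    return solutions, problems, history

sols, probs, history = coevolve()
```

    Because problems are rewarded only in the tractable band, their difficulty escalates with solution competence, producing the arms-race fitness gradient the abstract describes; with elitist selection the best solution never regresses.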

  15. An analytically iterative method for solving problems of cosmic-ray modulation

    NASA Astrophysics Data System (ADS)

    Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian

    2017-09-01

    The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution that is obtained during the initial iteration and amendments that may be obtained by subsequent iterations. The finding of the zero-order solution is based on the CR isotropy during propagation in space, whereas the anisotropy is taken into account when finding the next amendments. To begin with, the method is applied to solve the problem of CR modulation where the diffusion coefficient κ and the solar wind speed u are constants, with a local interstellar spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions that use only one iteration for two problems of CR modulation with u = constant and the same form of the LIS were obtained and tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k0η, where η = p/(m_0 c). For the second problem, the diffusion coefficient is given in the form κ = k0βη, where β = v/c is the particle speed relative to the speed of light. The obtained solutions matched the numerical solutions well, as well as the analytical solution for the problem where κ = constant.

  16. Some solutions of the general three body problem in form space

    NASA Astrophysics Data System (ADS)

    Titov, Vladimir

    2018-05-01

    Some solutions of the three-body problem with equal masses are considered in form space. The solutions in the usual Euclidean space may be restored from these form-space solutions. If the constant energy h < 0, the trajectories are located inside Hill's surface. Without loss of generality, due to scale symmetry we can set h = -1. Such a surface has a simple form in form space. Solutions of the isosceles and rectilinear three-body problems lie within Hill's curve; periodic solutions of the free-fall three-body problem start at one point of this curve and finish at another. The solutions are illustrated by a number of figures.

  17. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. 
For each of the algorithms, some orbit transfer test cases are included to provide insight on the accuracy and efficiency of these individual algorithms. Following this discussion, the combined parallel algorithm, known as the unified Lambert tool, is presented, and an explanation is given as to how it automatically selects which of the three perturbed solvers to use to compute the perturbed solution for a particular orbit transfer. The unified Lambert tool may be used to determine a single orbit transfer or to generate an extremal field map. A case study is presented for a mission that is required to rendezvous with two pieces of orbit debris (spent rocket boosters). The unified Lambert tool software developed in this dissertation is already being utilized by several industrial partners, and we are confident that it will play a significant role in practical applications, including the solution of Lambert problems that arise in current applications focused on enhanced space situational awareness.
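    The Picard-type iteration at the heart of the Chebyshev-Picard solvers repeatedly applies the integral form of the initial value problem. A bare-bones sketch of plain Picard iteration on a uniform grid (the dissertation's MCPI additionally uses Chebyshev nodes and spectral integration; this is only the underlying fixed-point idea, on a toy ODE):

```python
import numpy as np

def picard_solve(f, x0, t, iters=40):
    """Picard iteration x_{k+1}(t) = x0 + integral_0^t f(x_k(s), s) ds on a
    fixed grid, with trapezoidal quadrature for the integral operator."""
    x = np.full_like(t, x0)
    for _ in range(iters):
        g = f(x, t)
        # cumulative trapezoidal integral of g over t
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        x = x0 + integral
    return x

# Toy problem: x' = x, x(0) = 1, whose exact solution is exp(t)
t = np.linspace(0.0, 1.0, 201)
x = picard_solve(lambda x, t: x, 1.0, t)
err = np.max(np.abs(x - np.exp(t)))
```

    No Newton-type shooting or state transition matrix is needed, which is the property the dissertation exploits for boundary value problems as well.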

  18. Investigating the effect of mental set on insight problem solving.

    PubMed

    Ollinger, Michael; Jones, Gary; Knoblich, Günther

    2008-01-01

    Mental set is the tendency to solve certain problems in a fixed way based on previous solutions to similar problems. The moment of insight occurs when a problem cannot be solved using solution methods suggested by prior experience and the problem solver suddenly realizes that the solution requires different solution methods. Mental set and insight have often been linked together, and yet no attempt thus far has systematically examined the interplay between the two. Three experiments are presented that examine the extent to which sets of noninsight and insight problems affect the subsequent solutions of insight test problems. The results indicate a subtle interplay between mental set and insight: when the set involves noninsight problems, no mental-set effects are shown for the insight test problems, yet when the set involves insight problems, both facilitation and inhibition can be seen depending on the type of insight problem presented in the set. A two-process model that combines the representational change mechanism with that of proceduralization is detailed to explain these findings.

  19. Computational complexity in entanglement transformations

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric A.

    In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. 
Our results are obtained by encoding well-studied computational problems such as polynomial identity testing and tensor rank into questions of entanglement transformation. In this way, entanglement theory provides a physical manifestation of some of the most puzzling and abstract classical computation questions.

  20. Student Credit Card Debt in the 21st Century: Options for Financial Aid Administrators.

    ERIC Educational Resources Information Center

    Oleson, Mark

    2001-01-01

    Provides multiple workable solutions financial aid offices can offer students throughout their college experience to deal with debt: preventive solutions for avoiding problems with credit card debt, holistic solutions for other related problems, and remedial solutions for existing problems. (EV)

  1. Fundamental solution of the problem of linear programming and method of its determination

    NASA Technical Reports Server (NTRS)

    Petrunin, S. V.

    1978-01-01

    The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
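    The abstract does not reproduce the proposed method or its numerical examples. Purely for context, a small linear program and its optimal basic (vertex) solution can be computed with a modern solver (scipy here; this is obviously not the 1978 algorithm, and the example problem is mine):

```python
import numpy as np
from scipy.optimize import linprog

# minimize  -x - 2y   subject to   x + y <= 4,  x <= 3,  x >= 0, y >= 0
# (linprog's default variable bounds are already [0, +inf))
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 3.0])
```

    The optimum sits at the vertex (0, 4) of the feasible polytope with objective value -8, illustrating the standard fact that a solvable LP attains its optimum at a basic solution.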

  2. Electrodynamics; Problems and solutions

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina C.; Schrecengost, Zachariah S.

    2018-05-01

    This book of problems and solutions is a natural continuation of Ilie and Schrecengost's first book Electromagnetism: Problems and Solutions. Aimed towards students who would like to work independently on more electrodynamics problems in order to deepen their understanding and problem-solving skills, this book discusses main concepts and techniques related to Maxwell's equations, conservation laws, electromagnetic waves, potentials and fields, and radiation.

  3. Coupling between fluid dynamics and energy addition in arcjet and microwave thrusters

    NASA Technical Reports Server (NTRS)

    Micci, M. M.

    1986-01-01

    A new approach to numerically solving the problem of the constricted electric arcjet is presented. An Euler implicit finite-difference scheme is used to solve the full compressible Navier-Stokes equations in two dimensions. The boundary and initial conditions represent the constrictor section of the arcjet, and hydrogen is used as a propellant. The arc is modeled as a Gaussian distribution across the centerline of the constrictor. Temperature, pressure and velocity profiles for steady-state converged solutions show both axial and radial changes in distributions resulting from their interaction with the arc energy source for specific input conditions. The temperature rise is largest at the centerline, where the concentration of arc energy is greatest. The solution does not converge for all initial inputs, and the limitations in the range of obtainable solutions are discussed.

  4. The existence of semiregular solutions to elliptic spectral problems with discontinuous nonlinearities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlenko, V N; Potapov, D K

    2015-09-30

    This paper is concerned with the existence of semiregular solutions to the Dirichlet problem for an equation of elliptic type with discontinuous nonlinearity and when the differential operator is not assumed to be formally self-adjoint. Theorems on the existence of semiregular (positive and negative) solutions for the problem under consideration are given, and a principle of upper and lower solutions giving the existence of semiregular solutions is established. For positive values of the spectral parameter, elliptic spectral problems with discontinuous nonlinearities are shown to have nontrivial semiregular (positive and negative) solutions. Bibliography: 32 titles.

  5. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  6. Multiscale Analysis in the Compressible Rotating and Heat Conducting Fluids

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam; Maltese, David; Novotný, Antonín

    2017-06-01

    We consider the full Navier-Stokes-Fourier system under rotation in the singular regime of small Mach and Rossby, and large Reynolds and Péclet numbers, with ill prepared initial data on an infinite straight 3-D layer rotating with respect to the axis orthogonal to the layer. We perform the singular limit in the framework of weak solutions and identify the 2-D Euler-Boussinesq system as the target problem.

  7. Big Data Challenges for Large Radio Arrays

    NASA Technical Reports Server (NTRS)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  8. An approximate JKR solution for a general contact, including rough contacts

    NASA Astrophysics Data System (ADS)

    Ciavarella, M.

    2018-05-01

    In the present note, we suggest a simple closed-form approximate solution to the adhesive contact problem under the so-called JKR regime. The derivation is based on generalizing the original JKR energetic derivation, assuming calculation of the strain energy in adhesiveless contact and unloading at constant contact area. The underlying assumption is that the contact area distributions are the same as under adhesiveless conditions (for an appropriately increased normal load), so that in general the stress intensity factors will not be exactly equal at all contact edges. The solution is simply that the indentation is δ = δ1 - √(2wA′/P″), where w is the surface energy, δ1 is the adhesiveless indentation, A′ is the first derivative of the contact area, and P″ is the second derivative of the load with respect to δ1. The solution only requires macroscopic quantities, and not very elaborate local distributions; it is exact in many configurations like axisymmetric contacts, but also sinusoidal wave contacts, and correctly predicts some features of an ideal asperity model used as a test case and not as a real description of a rough contact problem. The solution therefore permits an estimate of the full solution for elastic rough solids with Gaussian multiple scales of roughness, which so far was lacking, using known adhesiveless simple results. The result turns out to depend only on the rms amplitude and slopes of the surface and, since in the fractal limit slopes would grow without limit, tends to the adhesiveless result (although in this limit the JKR model is inappropriate). The solution also goes to the adhesiveless result for large rms amplitude of roughness hrms, irrespective of the small-scale details, in agreement with common sense, well-known experiments and previous models by the author.
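    Because the formula δ = δ1 - √(2wA′/P″) involves only macroscopic Hertzian quantities, it can be checked in a few lines. The sketch below (illustrative parameter values, not taken from the paper) evaluates it for a Hertzian sphere, where it reproduces the classical JKR indentation δ = a²/R - √(2πwa/E*):

```python
import math

# Illustrative material/geometry parameters (assumed, not from the paper)
E_star = 1.0e9   # reduced elastic modulus [Pa]
R = 1.0e-3       # sphere radius [m]
w = 0.05         # work of adhesion [J/m^2]
delta1 = 1.0e-6  # adhesiveless indentation [m]

# Hertz (adhesiveless) sphere: A = pi*R*delta1, P = (4/3)*E*sqrt(R)*delta1^(3/2)
A_prime = math.pi * R                                  # dA/d(delta1)
P_second = E_star * math.sqrt(R) / math.sqrt(delta1)   # d^2P/d(delta1)^2

# Sequential estimate from the note: delta = delta1 - sqrt(2 w A' / P'')
delta_seq = delta1 - math.sqrt(2.0 * w * A_prime / P_second)

# Classical JKR sphere result: delta = a^2/R - sqrt(2 pi w a / E*), a^2 = R*delta1
a = math.sqrt(R * delta1)
delta_jkr = a * a / R - math.sqrt(2.0 * math.pi * w * a / E_star)
```

    The two expressions agree identically for the sphere, consistent with the claim that the approximation is exact for axisymmetric contacts.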

  9. Common aero vehicle autonomous reentry trajectory optimization satisfying waypoint and no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Jorris, Timothy R.

    2007-12-01

    To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. 
Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitations, and heating, waypoint, and no-fly zone constraints.

  10. Commentary: Missing the elephant in my office: recommendations for part-time careers in academic medicine.

    PubMed

    Helitzer, Deborah

    2009-10-01

    Several recent articles in this journal, including the article by Linzer and colleagues in this issue, discuss and promote the concept of part-time careers in academic medicine as a solution to the need to achieve a work-life balance and to address the changing demographics of academic medicine. The article by Linzer and colleagues presents the consensus of a task force that attempted to address practical considerations for part-time work in academic internal medicine. Missing from these discussions, however, are a consensus on the definition of part-time work, consideration of how such strategies would be available to single parents, how time or resources will be allocated to part-time faculty to participate in professional associations, develop professional networks, and maintain currency in their field, and how part-time work can allow for the development of expertise in research and scholarly activity. Most important, the discussions about the part-time solution do not address the root cause of dissatisfaction and attrition: the ever-increasing and unsustainable workload of full-time faculty. The realization that an academic full-time career requires a commitment of 80 hours per week begs the question of whether part-time faculty would agree to work 40 hours a week for part-time pay. The historical underpinnings of the current situation, the implications of part-time solutions for the academy, and the consequences of choosing part-time work as the primary solution are discussed. Alternative strategies for addressing some of the problems facing full-time faculty are proposed.

  11. A fast algorithm for solving a linear feasibility problem with application to Intensity-Modulated Radiation Therapy.

    PubMed

    Herman, Gabor T; Chen, Wei

    2008-03-01

    The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
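    The general idea of such sequential algorithms can be illustrated by cyclically projecting onto each violated interval (hyperslab) constraint. This is a hedged Agmon-Motzkin-Schoenberg-style sketch on synthetic data, not ART3 or ART3+ themselves, whose reflection steps and constraint-skipping logic (the ART3+ speedup) are more refined:

```python
import numpy as np

def cyclic_projection(A, lo, hi, x0, sweeps=500, tol=1e-9):
    """Find x with lo_i <= a_i . x <= hi_i by cyclically projecting onto each
    violated hyperslab, one constraint after another."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        done = True
        for a, l, h in zip(A, lo, hi):
            v = a @ x
            if v > h + tol:                 # above the slab: project down
                x -= (v - h) / (a @ a) * a
                done = False
            elif v < l - tol:               # below the slab: project up
                x += (l - v) / (a @ a) * a
                done = False
        if done:                            # all constraints satisfied
            break
    return x

# Synthetic feasibility problem: slabs of width 0.2 around a known feasible point,
# so the feasible region is full dimensional (as the ART3 theory requires)
rng = np.random.default_rng(2)
A = rng.standard_normal((15, 4))
x_feas = rng.standard_normal(4)
lo = A @ x_feas - 0.1
hi = A @ x_feas + 0.1
x = cyclic_projection(A, lo, hi, np.zeros(4))
```

    In the IMRT setting the rows of A would encode dose deposition and the intervals the prescribed dose bounds for tumor and organ voxels.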

  12. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
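    For the smooth problem of minimizing f(x) subject to g_i(x) ≤ 0, a δ-relaxed KKT condition has the following generic shape (our notation, a schematic sketch only; the paper's precise condition, and its construction of the multipliers from an approximating quadratic program, differ in detail):

```latex
% Smooth mathematical programming problem (notation ours):
\min_{x} f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \; i = 1, \dots, m.
% A point \bar{x} with multipliers \lambda_i \ge 0 is \delta-optimal when the
% stationarity, feasibility and complementarity residuals are all O(\delta):
\Bigl\| \nabla f(\bar{x}) + \sum_{i=1}^{m} \lambda_i \nabla g_i(\bar{x}) \Bigr\| \le \delta,
\qquad g_i(\bar{x}) \le \delta,
\qquad \bigl| \lambda_i \, g_i(\bar{x}) \bigr| \le \delta .
```

Setting δ = 0 recovers the classical Karush-Kuhn-Tucker conditions.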

  13. Analysis of an operator-differential model for magnetostrictive energy harvesting

    NASA Astrophysics Data System (ADS)

    Davino, D.; Krejčí, P.; Pimenov, A.; Rachinskii, D.; Visone, C.

    2016-10-01

    We present a model of, and analysis of an optimization problem for, a magnetostrictive harvesting device which converts mechanical energy of the repetitive process such as vibrations of the smart material to electrical energy that is then supplied to an electric load. The model combines a lumped differential equation for a simple electronic circuit with an operator model for the complex constitutive law of the magnetostrictive material. The operator based on the formalism of the phenomenological Preisach model describes nonlinear saturation effects and hysteresis losses typical of magnetostrictive materials in a thermodynamically consistent fashion. We prove well-posedness of the full operator-differential system and establish global asymptotic stability of the periodic regime under periodic mechanical forcing that represents mechanical vibrations due to varying environmental conditions. Then we show the existence of an optimal solution for the problem of maximization of the output power with respect to a set of controllable parameters (for the periodically forced system). Analytical results are illustrated with numerical examples of an optimal solution.
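    The Preisach formalism mentioned above builds a hysteresis operator as a weighted superposition of elementary relays. The following is a minimal sketch with three relays whose thresholds and weights are invented; the relays keep their internal state between the two sweeps, which is exactly what produces the loop.

```python
# Minimal discrete Preisach-type hysteresis sketch: the output is a weighted
# sum of non-ideal relays R_{b,a} (switch up at a, down at b, with b < a).

class Relay:
    def __init__(self, beta, alpha, state=-1):
        self.beta, self.alpha, self.state = beta, alpha, state
    def step(self, u):
        if u >= self.alpha:
            self.state = +1
        elif u <= self.beta:
            self.state = -1
        return self.state   # memory: state unchanged for beta < u < alpha

def preisach(relays, weights, inputs):
    return [sum(w * r.step(u) for w, r in zip(weights, relays))
            for u in inputs]

relays = [Relay(-0.5, 0.5), Relay(-0.2, 0.8), Relay(-0.8, 0.2)]
weights = [0.5, 0.3, 0.2]
# Sweep the input up, then back down: the outputs at u = 0.0 differ,
# which is the hysteresis loop (and the source of the hysteresis losses).
up = preisach(relays, weights, [-1.0, 0.0, 1.0])
down = preisach(relays, weights, [0.0, -1.0])
```

In the continuum limit the sum becomes an integral over the (β, α) half-plane against a Preisach density, which is where saturation and thermodynamic consistency are encoded.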

  14. Buffet Load Alleviation

    NASA Technical Reports Server (NTRS)

    Ryall, T. G.; Moses, R. W.; Hopkins, M. A.; Henderson, D.; Zimcik, D. G.; Nitzsche, F.

    2004-01-01

    High performance aircraft are, by their very nature, often required to undergo maneuvers involving high angles of attack. Under these conditions unsteady vortices emanating from the wing and the fuselage will impinge on the twin fins (required for directional stability), in some circumstances causing excessive buffet loads to be applied to the aircraft. These loads result in oscillatory stresses, which may cause significant amounts of fatigue damage. Active control is a possible solution to this important problem. A full-scale test was carried out on an F/A-18 fuselage and fins using piezoceramic actuators to control the vibrations. Buffet loads were simulated using very powerful electromagnetic shakers. The first phase of this test was concerned with open-loop system identification, whereas the second stage involved implementing linear time-invariant control laws. This paper looks at some of the problems encountered as well as the corresponding solutions and some results. It is expected that flight trials of a similar control system to alleviate buffet will occur as early as 2001.

  15. Millimetre-Wave Backhaul for 5G Networks: Challenges and Solutions

    PubMed Central

    Feng, Wei; Li, Yong; Jin, Depeng; Su, Li; Chen, Sheng

    2016-01-01

    The trend for dense deployment in future 5G mobile communication networks makes current wired backhaul infeasible owing to the high cost. Millimetre-wave (mm-wave) communication, a promising technique with the capability of providing a multi-gigabit transmission rate, offers a flexible and cost-effective candidate for 5G backhauling. By exploiting highly directional antennas, it becomes practical to cope with explosive traffic demands and to deal with interference problems. Several advancements in physical layer technology, such as hybrid beamforming and full duplexing, bring new challenges and opportunities for mm-wave backhaul. This article introduces a design framework for 5G mm-wave backhaul, including routing, spatial reuse scheduling and physical layer techniques. The associated optimization model, open problems and potential solutions are discussed to fully exploit the throughput gain of the backhaul network. Extensive simulations are conducted to verify the potential benefits of the proposed method for the 5G mm-wave backhaul design. PMID:27322265

  16. Brane with variable tension as a possible solution to the problem of the late cosmic acceleration

    NASA Astrophysics Data System (ADS)

    García-Aspeitia, Miguel A.; Hernandez-Almada, A.; Magaña, Juan; Amante, Mario H.; Motta, V.; Martínez-Robles, C.

    2018-05-01

    Braneworld models have been proposed as a possible solution to the problem of the accelerated expansion of the Universe. The idea is to dispense with dark energy (DE) and drive the late-time cosmic acceleration with a five-dimensional geometry. We investigate a brane model with a variable brane tension as a function of redshift, called a chrono-brane. We propose the polynomial function λ = (1+z)^n, inspired by tracker-scalar-field potentials. To constrain the exponent n we use the latest observational Hubble data from cosmic chronometers, Type Ia Supernovae from the full joint-light-curve-analysis (JLA) sample, baryon acoustic oscillations, and the posterior distance from the cosmic microwave background of the Planck 2015 measurements. A joint analysis of these data estimates n ≃ 6.19 ± 0.12, which generates a DE-like (cosmological-constant-like at late times) term in the Friedmann equation arising from the extra dimensions. This model is consistent with these data and can drive the Universe to an accelerated phase at late times.

  17. Using Diagrams as Tools for the Solution of Non-Routine Mathematical Problems

    ERIC Educational Resources Information Center

    Pantziara, Marilena; Gagatsis, Athanasios; Elia, Iliada

    2009-01-01

    The Mathematics education community has long recognized the importance of diagrams in the solution of mathematical problems. Particularly, it is stated that diagrams facilitate the solution of mathematical problems because they represent problems' structure and information (Novick & Hurley, 2001; Diezmann, 2005). Novick and Hurley were the first…

  18. Chimpanzee Problem-Solving: A Test for Comprehension.

    ERIC Educational Resources Information Center

    Premack, David; Woodruff, Guy

    1978-01-01

    Investigates a chimpanzee's capacity to recognize representations of problems and solutions, as well as its ability to perceive the relationship between each type of problem and its appropriate solutions using televised programs and photographic solutions. (HM)

  19. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
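    The nonlinear solve named above (a preconditioned Krylov method embedded in an inexact Newton method) can be illustrated on a toy problem. The sketch below keeps only the outer Newton loop on an invented 2x2 nonlinear system and, for brevity, replaces the Krylov inner solve with a direct 2x2 solve; it is an illustration of the Newton outer iteration, not of TranAir's solver.

```python
# Newton iteration on F(x) = 0 for a small invented system whose root is
# (sqrt(2), sqrt(2)): a circle of radius 2 intersected with the line x0 = x1.
import math

def F(x):
    return [x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]]

def J(x):  # analytic Jacobian of F
    return [[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]]

def newton(x, tol=1e-12, maxit=50):
    for _ in range(maxit):
        f = F(x)
        if max(abs(v) for v in f) < tol:
            break
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        # Solve J dx = -f by Cramer's rule (stand-in for the Krylov solve)
        dx0 = (-f[0] * d + b * f[1]) / det
        dx1 = (-a * f[1] + c * f[0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

root = newton([1.0, 2.0])   # converges to (sqrt(2), sqrt(2))
```

In an inexact Newton method the linear system is solved only approximately, with the inner tolerance tightened as the outer residual shrinks; the grid sequencing described in the abstract supplies progressively better initial guesses.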

  20. Electromagnetism; Problems and solutions

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina C.; Schrecengost, Zachariah S.

    2016-11-01

    Electromagnetism: Problems and solutions is an ideal companion book for the undergraduate student (sophomore, junior, or senior) who may want to work on more problems and receive immediate feedback while studying. Each chapter contains brief theoretical notes followed by the problem text with the solution and ends with a brief bibliography. Also presented are problems more general in nature, which may be a bit more challenging.

  1. Transfer of Solutions to Conditional Probability Problems: Effects of Example Problem Format, Solution Format, and Problem Context

    ERIC Educational Resources Information Center

    Chow, Alan F.; Van Haneghan, James P.

    2016-01-01

    This study reports the results of a study examining how easily students are able to transfer frequency solutions to conditional probability problems to novel situations. University students studied either a problem solved using the traditional Bayes formula format or using a natural frequency (tree diagram) format. In addition, the example problem…

  2. On making cuts for magnetic scalar potentials in multiply connected regions

    NASA Astrophysics Data System (ADS)

    Kotiuga, P. R.

    1987-04-01

    The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1891), Chap. 1, Article 20] and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. This problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The techniques used in the proof expose the incredible intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's (Ph.D. Thesis, McGill University, Montreal, 1984) heuristic interpretation of cuts and duality theorems via intersection matrices.

  3. Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin

    USGS Publications Warehouse

    Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.

    2018-01-01

    Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
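    The rounding device behind such fully polynomial-time approximation schemes can be sketched in a few lines: snap one objective onto a geometric grid (1+ε)^k and keep, per grid cell, the best value of the other objective. This is the generic ε-rounding idea, not the paper's tree-structured DP, and the points below are invented.

```python
# Epsilon-approximate Pareto frontier by geometric rounding of the cost axis:
# minimize cost, maximize value; one representative survives per cost bucket.
import math

def approx_pareto(points, eps):
    """points: list of (cost, value) with cost > 0."""
    best = {}
    for cost, value in points:
        k = math.floor(math.log(cost, 1.0 + eps))  # geometric bucket index
        if k not in best or value > best[k][1]:
            best[k] = (cost, value)
    return sorted(best.values())

pts = [(1.0, 1.0), (1.02, 1.5), (2.0, 2.0), (2.05, 2.1), (4.0, 2.2)]
front = approx_pareto(pts, 0.1)   # one point per (1.1)^k cost bucket
```

The number of buckets, and hence the size of the returned curve, is logarithmic in the cost range and polynomial in 1/ε, which is what makes the representation "polynomially succinct".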

  4. Capabilities of Fully Parallelized MHD Stability Code MARS

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2016-10-01

    Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. Parallel version of MARS, named PMARS, has been recently developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
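    The inverse vector iteration named above is easy to sketch serially (the parallel version distributes the matrix construction and the repeated solves); the 3x3 matrix below is an invented toy, not a MARS spectral matrix. Each step solves (A - σI) y = x and normalizes, so x converges to the eigenvector whose eigenvalue is closest to the shift σ.

```python
# Inverse vector iteration with a dense Gaussian-elimination solve (sketch).

def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, for the sketch)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def inverse_iteration(A, sigma=0.0, iters=50):
    n = len(A)
    B = [[A[i][j] - (sigma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    x = [1.0] * n
    for _ in range(iters):
        y = solve(B, x)                       # (A - sigma I) y = x
        nrm = sum(v * v for v in y) ** 0.5
        x = [v / nrm for v in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(a * b for a, b in zip(Ax, x)), x   # Rayleigh quotient, vector

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
lam, vec = inverse_iteration(A)   # eigenvalue nearest 0 is 2 - sqrt(2)
```

In MARS the repeated linear solves dominate the cost, which is why parallelizing both the matrix assembly (over magnetic surfaces) and the solves pays off.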

  5. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fix boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.

  6. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    NASA Technical Reports Server (NTRS)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  7. Crowdsourcing for Challenging Technical Problems - It Works!

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey R.

    2011-01-01

    The NASA Johnson Space Center Space Life Sciences Directorate (SLSD) and Wyle Integrated Science and Engineering (Wyle) will conduct a one-day business cluster at the 62nd IAC so that IAC attendees will understand the benefits of open innovation (crowdsourcing), review successful results of conducting technical challenges in various open innovation projects, and learn how an organization can effectively deploy these new problem solving tools to innovate more efficiently and effectively. Results from both the SLSD open innovation pilot program and the open innovation workshop conducted by the NASA Human Health and Performance Center will be discussed. NHHPC members will be recruited to participate in the business cluster (see membership http://nhhpc.nasa.gov) and as IAF members. Crowdsourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by the organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the SLSD, with the support of Wyle, established and implemented pilot projects in open innovation (crowdsourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, called Challenges by some open innovation service providers, and were then posted externally to seek solutions to these problems. In addition, an open call was issued internally to NASA employees Agency-wide (11 Field Centers and NASA HQ) using an open innovation service provider crowdsourcing platform to post NASA challenges from each Center for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally.
The 14 external problems or challenges were posted through three different vendors: InnoCentive, yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowdsourcing platform designed for use internal to an organization and customized for NASA use, and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging.

  8. Well-posedness, linear perturbations, and mass conservation for the axisymmetric Einstein equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dain, Sergio; Ortiz, Omar E.; Facultad de Matematica, Astronomia y Fisica, FaMAF, Universidad Nacional de Cordoba, Instituto de Fisica Enrique Gaviola, IFEG, CONICET, Ciudad Universitaria

    2010-02-15

    For axially symmetric solutions of the Einstein equations there exists a gauge which has the remarkable property that the total mass can be written as a conserved, positive definite integral on the spacelike slices. The mass integral provides a nonlinear control of the variables along the whole evolution. In this gauge, the Einstein equations reduce to a coupled hyperbolic-elliptic system which is formally singular at the axis. As a first step in analyzing this system of equations we study linear perturbations on a flat background. We prove that the linear equations reduce to a very simple system of equations which provides, through the mass formula, useful insight into the structure of the full system. However, the singular behavior of the coefficients at the axis makes the study of this linear system difficult from the analytical point of view. In order to understand the behavior of the solutions, we study their numerical evolution. We provide strong numerical evidence that the system is well-posed and that its solutions have the expected behavior. Finally, this linear system allows us to formulate a model problem which is physically interesting in itself, since it is connected with the linear stability of black hole solutions in axial symmetry. This model can contribute significantly to solving the nonlinear problem and at the same time it appears to be tractable.

  9. Elasticity solutions for a class of composite laminate problems with stress singularities

    NASA Technical Reports Server (NTRS)

    Wang, S. S.

    1983-01-01

    A study on the fundamental mechanics of fiber-reinforced composite laminates with stress singularities is presented. Based on the theory of anisotropic elasticity and Lekhnitskii's complex-variable stress potentials, a system of coupled governing partial differential equations is established. An eigenfunction expansion method is introduced to determine the orders of stress singularities in composite laminates with various geometric configurations and material systems. Complete elasticity solutions are obtained for this class of singular composite laminate mechanics problems. Homogeneous solutions in eigenfunction series and particular solutions in polynomials are presented for several cases of interest. Three examples are given to illustrate the method of approach and the basic nature of the singular laminate elasticity solutions. The first problem is the well-known laminate free-edge stress problem, which has a rather weak stress singularity. The second problem is the important composite delamination problem, which has a strong crack-tip stress singularity. The third problem is the commonly encountered bonded composite joint, which has a complex solution structure with moderate orders of stress singularities.
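    The eigenfunction expansion named above has, near a singular point, the generic local form sketched below (our schematic notation, not the paper's):

```latex
% Schematic eigenfunction expansion near a stress singularity at r = 0:
u_i(r,\theta) = \sum_n C_n \, r^{\lambda_n} \, U_i^{(n)}(\theta), \qquad
\sigma_{ij}(r,\theta) = \sum_n C_n \, \lambda_n \, r^{\lambda_n - 1} \, S_{ij}^{(n)}(\theta).
% Any eigenvalue with 0 < Re(\lambda_n) < 1 makes the stress unbounded as
% r \to 0; a crack tip gives the classical \lambda_1 = 1/2, i.e. r^{-1/2}.
```

Determining the admissible λ_n from the boundary and interface conditions is what the eigenfunction expansion method does; a weak singularity (λ_1 close to 1) versus a strong one (λ_1 = 1/2) is the distinction drawn between the free-edge and delamination examples.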

  10. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
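    The snapshot step of POD is simple to sketch: the dominant mode is the leading eigenvector of the snapshot Gram matrix, lifted back to state space. The abstract's POD-DEIM pipeline is far richer; this sketch shows only that one step, with plain power iteration and invented snapshot data (two fixed spatial patterns, the first carrying most of the energy).

```python
# POD by the method of snapshots (dominant mode only, for illustration).

def pod_dominant_mode(snapshots, iters=100):
    m = len(snapshots)
    # Gram matrix G_ij = <s_i, s_j>
    G = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
          for j in range(m)] for i in range(m)]
    w = [1.0] * m
    for _ in range(iters):                     # power iteration on G
        w = [sum(G[i][j] * w[j] for j in range(m)) for i in range(m)]
        nrm = sum(v * v for v in w) ** 0.5
        w = [v / nrm for v in w]
    # Lift back to state space: mode = sum_i w_i * s_i, normalized
    mode = [sum(w[i] * snapshots[i][k] for i in range(m))
            for k in range(len(snapshots[0]))]
    nrm = sum(v * v for v in mode) ** 0.5
    return [v / nrm for v in mode]

u = [1.0, 0.0, 1.0, 0.0]        # dominant spatial pattern (invented)
v = [0.0, 1.0, 0.0, -1.0]       # weak secondary pattern (invented)
snaps = [[3 * a + 0.1 * b for a, b in zip(u, v)],
         [2 * a - 0.1 * b for a, b in zip(u, v)],
         [4 * a + 0.2 * b for a, b in zip(u, v)]]
mode = pod_dominant_mode(snaps)  # aligns (up to sign) with u / ||u||
```

A full POD basis keeps the top few eigenvectors rather than one; DEIM then selects interpolation points so that the nonlinear term can be evaluated at only a few state components.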

  11. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  12. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.; Brewe, David E.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
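    The multigrid machinery itself can be sketched on the plain 1-D Poisson problem. This generic V-cycle (weighted Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation) illustrates only the iteration technique, not the Elrod cavitation model with its mixed parabolic/hyperbolic character.

```python
# Multigrid V-cycle for -u'' = f on (0,1), u(0) = u(1) = 0, with n = 2^k - 1
# interior points; u and f are lists of interior values.
import math

def vcycle(u, f, h, nu=3):
    n = len(u)
    def jacobi(u, f, sweeps):
        for _ in range(sweeps):            # weighted Jacobi, omega = 2/3
            new = u[:]
            for i in range(n):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                new[i] = u[i] + (2.0 / 3.0) * 0.5 * (
                    left + right + h * h * f[i] - 2.0 * u[i])
            u = new
        return u
    u = jacobi(u, f, nu)                   # pre-smoothing
    if n <= 3:
        return jacobi(u, f, 50)            # coarsest level: smooth to death
    r = []                                 # residual r = f - A u
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    rc = [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
          for i in range((n - 1) // 2)]    # full-weighting restriction
    ec = vcycle([0.0] * len(rc), rc, 2.0 * h, nu)
    e = [0.0] * n                          # linear-interpolation prolongation
    for i, v in enumerate(ec):
        e[2 * i + 1] += v
        e[2 * i] += 0.5 * v
        e[2 * i + 2] += 0.5 * v
    u = [a + b for a, b in zip(u, e)]
    return jacobi(u, f, nu)                # post-smoothing

n, h = 31, 1.0 / 32.0
f = [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = [0.0] * n
for _ in range(15):
    u = vcycle(u, f, h)    # each cycle shrinks the algebraic error
```

The appeal for the bearing problem is the same as here: smoothing kills high-frequency error cheaply on the fine grid, and the coarse grids handle the smooth error components that plain iteration stalls on.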

  13. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, C. M.; Brewe, D. E.

    1989-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  14. Geometric Series: A New Solution to the Dog Problem

    ERIC Educational Resources Information Center

    Dion, Peter; Ho, Anthony

    2013-01-01

    This article describes what is often referred to as the dog, beetle, mice, ant, or turtle problem. Solutions to this problem exist, some being variations of each other, which involve mathematics of a wide range of complexity. Herein, the authors describe the intuitive solution and the calculus solution and then offer a completely new solution…

  15. The Vertical Linear Fractional Initialization Problem

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    1999-01-01

    This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.

  16. Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation

    NASA Technical Reports Server (NTRS)

    Mook, D. J.; Lew, Jiann-Shiun

    1991-01-01

    Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
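    The simplest member of the shooting family is *single* shooting, sketched below for the linear BVP y'' = y, y(0) = 0, y(1) = 1: guess the initial slope s, integrate with RK4, and drive y(1; s) - 1 to zero with the secant method. Multiple shooting, as in the paper, splits the interval into subintervals and adds matching conditions, which is what makes stiff and jump-discontinuous problems tractable.

```python
# Single shooting for y'' = y, y(0) = 0, y(1) = 1 (exact slope: 1/sinh(1)).
import math

def integrate(s, steps=200):
    """RK4 integration of (y, y')' = (y', y) from x = 0 to 1; returns y(1)."""
    y, yp = 0.0, s
    h = 1.0 / steps
    def rhs(state):
        return (state[1], state[0])          # (y', y'') with y'' = y
    for _ in range(steps):
        k1 = rhs((y, yp))
        k2 = rhs((y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1]))
        k3 = rhs((y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1]))
        k4 = rhs((y + h * k3[0], yp + h * k3[1]))
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return y

def shoot(target=1.0):
    s0, s1 = 0.0, 1.0
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(50):                      # secant iteration on the slope
        if abs(f1) < 1e-12 or f1 == f0:
            break
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        s0, f0 = s1, f1
        s1 = s2
        f1 = integrate(s1) - target
    return s1

s = shoot()    # converges to 1/sinh(1)
```

For a linear problem the secant step lands on the (discrete) root immediately, mirroring the paper's observation that the algebraic equations yield exact solutions in the linear case.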

  17. High order solution of Poisson problems with piecewise constant coefficients and interface jumps

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben

    2017-04-01

    We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
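    The fast-Poisson building block is easy to sketch in 1-D: expand in the discrete sine basis, divide each coefficient by the eigenvalue of the discrete Laplacian, and transform back. The sine sums are written out directly here (O(N^2) for clarity); an FFT-based DST gives the O(N log N) version used in practice. This is a generic sketch, not the paper's 2-D CFM/BIM algorithm.

```python
# Sine-transform solver for -u'' = f on (0,1), u(0) = u(1) = 0.
import math

def poisson_dirichlet(f, h):
    n = len(f)
    # forward discrete sine transform of f
    fh = [sum(f[j] * math.sin(math.pi * (k + 1) * (j + 1) * h)
              for j in range(n)) for k in range(n)]
    # eigenvalues of the 1-D discrete Laplacian (2 - 2 cos(k pi h)) / h^2
    lam = [(2.0 - 2.0 * math.cos(math.pi * (k + 1) * h)) / (h * h)
           for k in range(n)]
    uh = [fh[k] / lam[k] for k in range(n)]
    # inverse transform (same sine sums, scaled by 2h)
    return [2.0 * h * sum(uh[k] * math.sin(math.pi * (k + 1) * (j + 1) * h)
                          for k in range(n)) for j in range(n)]

n = 63
h = 1.0 / (n + 1)
f = [math.pi ** 2 * math.sin(math.pi * (j + 1) * h) for j in range(n)]
u = poisson_dirichlet(f, h)   # close to sin(pi x), up to O(h^2)
```

The CFM's role in the full algorithm is to supply corrected right-hand sides near interfaces so that exactly this kind of regular-grid solver retains high-order accuracy despite the discontinuities.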

  18. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. 
In each blade row of the turbomachine, only the dominant frequencies are resolved, namely, combinations of neighbor's blade passing. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and computational resources available. A cost comparison with a time-accurate computation for an Euler calculation on a two-dimensional multi-stage compressor obtained an order of magnitude savings, and a RANS calculation on a three-dimensional single-stage compressor achieved two orders of magnitude savings, with comparable accuracy.
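
    The core mechanics of the Time Spectral approach, a spectral evaluation of the time derivative combined with pseudo-time marching directly to the periodic state, can be sketched on a scalar model equation (a hedged illustration with assumed parameters, not the URANS solver itself):

```python
import numpy as np

# Time Spectral sketch on the scalar model du/dt + u = cos(t), whose periodic
# state is u(t) = (cos t + sin t)/2.  The time derivative is evaluated
# spectrally at N samples of the period, and the periodic state is reached by
# marching in a pseudo-time, so transients are never resolved.
N = 16
t = 2.0 * np.pi * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)  # integer wavenumbers

def ddt(u):
    """Spectral time derivative of periodic samples."""
    return np.fft.ifft(1j * k * np.fft.fft(u)).real

u = np.zeros(N)          # initial guess for all time levels simultaneously
dtau = 0.02              # pseudo-time step, chosen small enough for stability
for _ in range(4000):
    residual = ddt(u) + u - np.cos(t)
    u = u - dtau * residual
```

    After convergence, `u` holds the periodic solution at all N time levels at once, which is also why the method's memory footprint grows with the number of time levels stored.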

  19. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  20. The covariance matrix for the solution vector of an equality-constrained least-squares problem

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1976-01-01

    Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
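
    One standard way to carry out such a computation is the null-space method; a minimal sketch with hypothetical data follows (the book's algorithms are more careful numerically):

```python
import numpy as np

# Covariance of the solution of min ||Ax - b|| subject to Cx = d, via the
# null-space method: write x = x0 + Z y with C x0 = d and C Z = 0, solve an
# unconstrained least-squares problem for y, and propagate its covariance.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
C = np.ones((1, 4))                         # single constraint: sum of x equals 1
d = np.array([1.0])

x0 = np.linalg.lstsq(C, d, rcond=None)[0]   # particular solution of Cx = d
Z = np.linalg.svd(C)[2][C.shape[0]:].T      # orthonormal basis of null(C)
AZ = A @ Z
y = np.linalg.lstsq(AZ, b - A @ x0, rcond=None)[0]
x = x0 + Z @ y                              # constrained least-squares solution

dof = len(b) - (A.shape[1] - C.shape[0])    # residual degrees of freedom
sigma2 = np.sum((A @ x - b) ** 2) / dof     # unbiased noise-variance estimate
cov = sigma2 * Z @ np.linalg.inv(AZ.T @ AZ) @ Z.T
```

    Note that directions fixed by the constraints carry zero variance, so the covariance matrix is rank-deficient by the number of constraints.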

  1. Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; Simutė, S.

    2017-12-01

    We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
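
    The HMC ingredient can be sketched on a toy Gaussian posterior (all parameters and the target here are invented for illustration; the study's target is the seismic source posterior evaluated through the strain database):

```python
import numpy as np

# Minimal Hamiltonian Monte Carlo sketch: leapfrog integration of Hamiltonian
# dynamics plus a Metropolis accept/reject step, here sampling a 2-D Gaussian.
rng = np.random.default_rng(1)
P = np.array([[2.0, 0.6], [0.6, 1.0]])   # precision matrix of the toy posterior

def U(q):            # negative log posterior (potential energy)
    return 0.5 * q @ P @ q

def grad_U(q):       # the easily computable derivative that HMC exploits
    return P @ q

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.standard_normal(2)                   # resample momentum
    q_new, p_new = q.copy(), p - 0.5 * eps * grad_U(q)
    for _ in range(n_leap):                      # leapfrog trajectory
        q_new = q_new + eps * p_new
        p_new = p_new - eps * grad_U(q_new)
    p_new = p_new + 0.5 * eps * grad_U(q_new)    # restore the final half step
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.asarray(samples)
```

    With gradient information the proposals travel far while keeping high acceptance, which is where the sample-count advantage over derivative-free samplers comes from.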

  2. Examining the Preparatory Effects of Problem Generation and Solution Generation on Learning from Instruction

    ERIC Educational Resources Information Center

    Kapur, Manu

    2018-01-01

    The goal of this paper is to isolate the preparatory effects of problem-generation from solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized-controlled design, students were assigned to one of two conditions: (a) problem-posing with solution generation, where they…

  3. Theoretical investigation of EM wave generation and radiation in the ULF, ELF, and VLF bands by the electrodynamic orbiting tether

    NASA Technical Reports Server (NTRS)

    Estes, Robert D.; Grossi, Mario D.

    1989-01-01

    The problem of electromagnetic wave generation by an electrodynamic tethered satellite system is important both for the ordinary operation of such systems and for their possible application as orbiting transmitters. The tether's ionospheric circuit closure problem is closely linked with the propagation of charge-carrying electromagnetic wave packets away from the tethered system. Work is reported which represents a step towards a solution to the problem that takes into account the effects of boundaries and of vertical variations in plasma density, collision frequencies, and ion species. The theory of Alfvén wave packet generation by an electrodynamic tethered system in an infinite plasma medium is reviewed, and a brief summary of previous work on the problem is given. The consequences of the presence of the boundaries and the vertical nonuniformity are then examined. One of the most significant new features to emerge when ion-neutral collisions are taken into account is the coupling of the Alfvén waves to the fast magnetosonic wave. This latter wave is important, as it may be confined by vertical variations in the Alfvén speed to a sort of leaky ionospheric waveguide, the resonances of which could be of great importance to the signal received on the Earth's surface. The infinite-medium solution for the case where the (uniform) geomagnetic field makes an arbitrary angle with the vertical is taken as the incident wave packet. Even without a full solution, a number of conclusions can be drawn, the most important of which may be that the electromagnetic field associated with the operation of a steady-current tethered system will probably be too weak to detect on the Earth's surface, even for large tether currents. This is due to the total reflection of the incident wave at the atmospheric boundary and the inability of a steady-current tethered system to excite the ionospheric waveguide. An outline of the approach to the numerical problem is given.
The use of numerical integrations and boundary conditions consistent with a conducting Earth is proposed to obtain the solution for the horizontal electromagnetic field components at the boundary of the ionosphere with the atmospheric cavity.

  4. Solving the Nonlocality Riddle by Conformal Quantum Geometrodynamics

    NASA Astrophysics Data System (ADS)

    Santamato, Enrico; de Martini, Francesco

    2012-01-01

    Since the 1935 proposal by Einstein, Podolsky and Rosen, the riddle of nonlocality, today demonstrated by the violation of Bell's inequalities in innumerable experiments, has been a cause of concern and confusion within the debate over the foundations of quantum mechanics. The present paper tackles the problem by a nonrelativistic approach based on conformal differential geometry applied to the solution of the dynamical problem of two entangled spin-1/2 particles. It is found that the quantum nonlocality may be understood on the basis of a conformal quantum geometrodynamics acting necessarily on the full "configuration space" of the entangled particles. Finally, the violation of the Bell inequalities is demonstrated without recourse to the common nonlocality paradigm.

  5. Information geometry and its application to theoretical statistics and diffusion tensor magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wisniewski, Nicholas Andrew

    This dissertation is divided into two parts. First, we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of Normal distributions. From this we construct a geometric hypothesis testing scheme. Second, we investigate the most commonly used geometric methods employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation, by carrying out a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.
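
    One common Riemannian-motivated interpolation scheme for symmetric positive-definite tensors, log-Euclidean interpolation, can be sketched as follows (a generic illustration of the class of methods discussed, not necessarily the exact metric used in the dissertation; the tensors are invented):

```python
import numpy as np

# Log-Euclidean interpolation of two symmetric positive-definite (SPD)
# diffusion tensors: interpolate linearly in the matrix-logarithm domain,
# which keeps every interpolant SPD (unlike componentwise interpolation).
def logm_spd(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def interp_spd(A, B, t):
    return expm_sym((1.0 - t) * logm_spd(A) + t * logm_spd(B))

A = np.diag([2.0, 0.5, 0.5])                 # prolate tensor aligned with x
B = np.array([[0.7, 0.2, 0.0],
              [0.2, 1.5, 0.1],
              [0.0, 0.1, 0.9]])              # a generic SPD tensor
M = interp_spd(A, B, 0.5)                    # midpoint tensor, guaranteed SPD
```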

  6. X-ray EM simulation tool for ptychography dataset construction

    NASA Astrophysics Data System (ADS)

    Stoevelaar, L. Pjotr; Gerini, Giampiero

    2018-03-01

    In this paper, we present an electromagnetic full-wave modeling framework, as a supporting EM tool that provides data sets for X-ray ptychographic imaging. Modeling the entire scattering problem with Finite Element Method (FEM) tools is, in fact, a prohibitive task, because of the large area illuminated by the beam (due to the poor focusing power at these wavelengths) and the very small features to be imaged. To overcome this problem, the spectrum of the illumination beam is decomposed into a discrete set of plane waves. This allows reducing the electromagnetic modeling volume to the one enclosing the area to be imaged. The total scattered field is reconstructed by superimposing the solutions for each plane-wave illumination.

  7. A comparative study of Full Navier-Stokes and Reduced Navier-Stokes analyses for separating flows within a diffusing inlet S-duct

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Reddy, D. R.; Kapoor, K.

    1993-01-01

    A three-dimensional implicit Full Navier-Stokes (FNS) analysis and a 3D Reduced Navier-Stokes (RNS) initial-value space-marching solution technique have been applied to a class of separated flow problems within a diffusing S-duct configuration characterized by vortex lift-off. Both the FNS and the RNS solution techniques were able to capture the overall flow physics of vortex lift-off, and gave remarkably similar results which agreed reasonably well with the experimentally measured average performance parameters of engine face total pressure recovery and distortion. However, the Full Navier-Stokes and Reduced Navier-Stokes analyses also consistently predicted separation further downstream in the M2129 inlet S-duct than was indicated by experimental data; thus, compensating errors were present in the two Navier-Stokes analyses. The difficulties encountered in the Navier-Stokes separation analyses of the M2129 inlet S-duct center primarily on turbulence model issues, which involve two distinct phenomena, namely, (1) characterization of low-skin-friction adverse-pressure-gradient flows, and (2) description of the near-wall behavior of flows characterized by vortex lift-off.

  8. The potential application of the blackboard model of problem solving to multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.

  9. Optimal Electrodynamic Tether Phasing Maneuvers

    NASA Technical Reports Server (NTRS)

    Bitzer, Matthew S.; Hall, Christopher D.

    2007-01-01

    We study the minimum-time orbit phasing maneuver problem for a constant-current electrodynamic tether (EDT). The EDT is assumed to be a point mass, and the electromagnetic forces acting on the tether are always perpendicular to the local magnetic field. After deriving and non-dimensionalizing the equations of motion, the only input parameters become the current and the phase angle. Solution examples, including initial Lagrange costates, time of flight, thrust plots, and thrust angle profiles, are given for a wide range of current magnitudes and phase angles. The two-dimensional cases presented use a non-tilted magnetic dipole model, and the solutions are compared to existing literature. We are able to compare similar trajectories for a constant-thrust phasing maneuver, and we find that the time of flight is longer for the constant-thrust case with similar initial thrust values and phase angles. Full three-dimensional solutions, which use a tilted magnetic dipole model, are also analyzed for orbits with small inclinations.

  10. Exact solution for the time evolution of network rewiring models

    NASA Astrophysics Data System (ADS)

    Evans, T. S.; Plato, A. D. K.

    2007-05-01

    We consider the rewiring of a bipartite graph using a mixture of random and preferential attachment. The full mean-field equations for the degree distribution and its generating function are given. The exact solution of these equations for all finite parameter values at any time is found in terms of standard functions. It is demonstrated that these solutions are an excellent fit to numerical simulations of the model. We discuss the relationship between our model and several others in the literature, including examples of urn, backgammon, and balls-in-boxes models, the Watts and Strogatz rewiring problem, and some models of zero range processes. Our model is also equivalent to those used in various applications including cultural transmission, family name and gene frequencies, glasses, and wealth distributions. Finally some Voter models and an example of a minority game also show features described by our model.
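
    The rewiring dynamics can be simulated directly (parameter values below are assumed for illustration; the paper's contribution is the exact mean-field solution, not this simulation):

```python
import random
from collections import Counter

# Simulation sketch of the rewiring model: E edges ("individuals") attached to
# N boxes ("artifacts").  Each step detaches one uniformly chosen edge and
# reattaches it either to a uniformly random box (probability p_r, random
# attachment) or to the box of another uniformly chosen edge (preferential
# attachment, with probability proportional to current degree).
random.seed(2)
N, E, p_r, steps = 50, 200, 0.1, 20000
attach = [i % N for i in range(E)]        # which box each edge points at
for _ in range(steps):
    e = random.randrange(E)               # edge to rewire
    if random.random() < p_r:
        attach[e] = random.randrange(N)   # random attachment
    else:
        attach[e] = attach[random.randrange(E)]   # preferential attachment
degree = Counter(attach)                  # degree distribution after rewiring
```

    The total edge count is conserved by construction; only the degree distribution over the boxes evolves.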

  11. Finite element solution of optimal control problems with state-control inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1992-01-01

    It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.

  12. [Are the flight security measures good for the patients? The "sickurity" problem].

    PubMed

    Felkai, Péter

    2010-10-10

    Due to increasingly strict security requirements at airports, prevention of air-travel-related illnesses has become more difficult. The side effects of the restrictions (e.g., fluid and movement restrictions) can trigger or even aggravate pathophysiological processes. The most advanced security check method, the full-body scan, besides raising ethical and moral considerations, may induce as yet unknown pathological processes. We face a similar problem with the traveller who becomes ill or injured during the trip. In this case, repatriation is often required, which is usually accomplished by commercial airlines. If the patient must be transported on a stretcher, this is also possible on a regular flight, but he or she must then be accompanied by a medical professional. This solution raises even more security problems: not only the sick person and the medical team, but also their medical equipment and medicines have to be checked. In the absence of standardised regulations, security staff handle the problem with varying approaches, from empathy to outright refusal. For these reasons, a clear and exact regulation is needed, which must be based upon medical experts' opinion and should address not only flight security but the patient's security as well. Such a regulation would end the defencelessness of patients and their accompanying medical personnel against local authorities and security services. The same is true for handicapped persons. The author suggests solutions to the problem, balancing between flight security and the patient's "sickurity".

  13. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially with regard to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843

  14. Building University Capacity to Visualize Solutions to Complex Problems in the Arctic

    NASA Astrophysics Data System (ADS)

    Broderson, D.; Veazey, P.; Raymond, V. L.; Kowalski, K.; Prakash, A.; Signor, B.

    2016-12-01

    Rapidly changing environments are creating complex problems across the globe, which are particularly magnified in the Arctic. These worldwide challenges can best be addressed through diverse and interdisciplinary research teams. It is incumbent on such teams to promote co-production of knowledge and data-driven decision-making by identifying effective methods to communicate their findings and to engage with the public. Decision Theater North (DTN) is a new semi-immersive visualization system that provides a space for teams to collaborate and develop solutions to complex problems, relying on diverse sets of skills and knowledge. It provides a venue to synthesize the talents of scientists, who gather information (data); modelers, who create models of complex systems; artists, who develop visualizations; communicators, who connect and bridge populations; and policymakers, who can use the visualizations to develop sustainable solutions to pressing problems. The mission of Decision Theater North is to provide a cutting-edge visual environment to facilitate dialogue and decision-making by stakeholders including government, industry, communities and academia. We achieve this mission by adopting a multi-faceted approach reflected in the theater's design, technology, networking capabilities, user support, community relationship building, and strategic partnerships. DTN is a joint project of Alaska's National Science Foundation Experimental Program to Stimulate Competitive Research (NSF EPSCoR) and the University of Alaska Fairbanks (UAF), who have brought the facility up to full operational status and are now expanding its development space to support larger team science efforts. Based in Fairbanks, Alaska, DTN is uniquely poised to address changes taking place in the Arctic and subarctic, and is connected with a larger network of decision theaters that include the Arizona State University Decision Theater Network and the McCain Institute in Washington, DC.

  15. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function, and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory, the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution for the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that requires only one impulse. For brevity, degenerate and singular solutions are not discussed in detail, but should be presented in a following study.
Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  16. Crowd Sourcing for Challenging Technical Problems and Business Model

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider's crowd sourcing platform, with NASA challenges from each Center posted for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit.
The Yet2.com challenges yielded many new industry and academic contacts in bone imaging, microbial detection and even the use of pharmaceuticals for radiation protection. The internal challenges through NASA@Work drew over 6000 participants across all NASA centers. Challenges conducted by each NASA center elicited ideas and solutions from several other NASA centers and demonstrated rapid and efficient participation from employees at multiple centers to contribute to problem solving. Finally, on January 19, 2011, the SLSD conducted a workshop on open collaboration and innovation strategies and best practices through the newly established NASA Human Health and Performance Center (NHHPC). Initial projects will be described leading to a new business model for SLSD.

  17. Solution of Tikhonov's Motion-Separation Problem Using the Modified Newton-Kantorovich Theorem

    NASA Astrophysics Data System (ADS)

    Belolipetskii, A. A.; Ter-Krikorov, A. M.

    2018-02-01

    The paper presents a new way to prove the existence of a solution of the well-known Tikhonov's problem on systems of ordinary differential equations in which one part of the variables performs "fast" motions and the other part, "slow" motions. Tikhonov's problem has been the subject of a large number of works in connection with its applications to a wide range of mathematical models in natural science and economics. Only a short list of publications, which present the proof of the existence of solutions in this problem, is cited. The aim of the paper is to demonstrate the possibility of applying the modified Newton-Kantorovich theorem to prove the existence of a solution in Tikhonov's problem. The technique proposed can be used to prove the existence of solutions of other classes of problems with a small parameter.

  18. Knowledge acquisition for case-based reasoning systems

    NASA Technical Reports Server (NTRS)

    Riesbeck, Christopher K.

    1988-01-01

    Case-based reasoning (CBR) is a simple idea: solve new problems by adapting old solutions to similar problems. The CBR approach offers several potential advantages over rule-based reasoning: rules are not combined blindly in a search for solutions, solutions can be explained in terms of concrete examples, and performance can improve automatically as new problems are solved and added to the case library. Moving CBR from the university research environment to the real world requires smooth interfaces for getting knowledge from experts. Described are the basic elements of an interface for acquiring three basic bodies of knowledge that any case-based reasoner requires: the case library of problems and their solutions, the analysis rules that flesh out input problem specifications so that relevant cases can be retrieved, and the adaptation rules that adjust old solutions to fit new problems.

  19. Transformations of software design and code may lead to reduced errors

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1983-01-01

    The capability of programmers and non-programmers to specify problem solutions by developing example solutions, and of the programmers also by writing computer programs, was investigated; each method of specification was applied at various levels of problem complexity. The level of difficulty of each problem was reflected by the number of steps needed by the user to develop a solution. Machine processing of the user inputs permitted inferences to be developed about the algorithms required to solve a particular problem. The interactive feedback of processing results led users to a more precise definition of the desired solution. Two participant groups (programmers and bookkeepers/accountants) working with three levels of problem complexity and three levels of processor complexity were used. The experimental task required specification of a logic for the solution of a Navy task force problem.

  20. Boundary Approximation Methods for Solving Elliptic Problems on Unbounded Domains

    NASA Astrophysics Data System (ADS)

    Li, Zi-Cai; Mathon, Rudolf

    1990-08-01

    Boundary approximation methods with partial solutions are presented for solving a complicated problem on an unbounded domain, with both a crack singularity and a corner singularity. Also an analysis of partial solutions near the singular points is provided. These methods are easy to apply, have good stability properties, and lead to highly accurate solutions. Hence, boundary approximation methods with partial solutions are recommended for the treatment of elliptic problems on unbounded domains provided that piecewise solution expansions, in particular, asymptotic solutions near the singularities and infinity, can be found.

  1. Existence and non-uniqueness of similarity solutions of a boundary-layer problem

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Lakin, W. D.

    1986-01-01

    A Blasius boundary value problem with inhomogeneous lower boundary conditions f(0) = 0 and f'(0) = -lambda with lambda strictly positive was considered. The Crocco variable formulation of this problem has a key term which changes sign in the interval of interest. It is shown that solutions of the boundary value problem do not exist for values of lambda larger than a positive critical value lambda_c. The existence of solutions is proven for 0 < lambda < lambda_c by considering an equivalent initial value problem. It is found, however, that for 0 < lambda < lambda_c, solutions of the boundary value problem are nonunique. Physically, this nonuniqueness is related to multiple values of the skin friction.

  2. Existence and non-uniqueness of similarity solutions of a boundary layer problem

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Lakin, W. D.

    1984-01-01

    A Blasius boundary value problem with inhomogeneous lower boundary conditions f(0) = 0 and f'(0) = -lambda with lambda strictly positive was considered. The Crocco variable formulation of this problem has a key term which changes sign in the interval of interest. It is shown that solutions of the boundary value problem do not exist for values of lambda larger than a positive critical value lambda_c. The existence of solutions is proven for 0 < lambda < lambda_c by considering an equivalent initial value problem. It is found, however, that for 0 < lambda < lambda_c, solutions of the boundary value problem are nonunique. Physically, this nonuniqueness is related to multiple values of the skin friction.

  3. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
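
    The core idea, replacing the infinite-dimensional delayed state by a finite history vector advanced with a difference equation, can be sketched for a scalar delay equation (a plain Euler discretization for illustration only; the schemes in the paper use piecewise-constant and spline approximations with semigroup convergence arguments):

```python
def simulate_dde(a=-2.0, b=1.0, tau=1.0, n_hist=100, t_end=5.0):
    """Euler discretization of the hereditary system
        x'(t) = a*x(t) + b*x(t - tau),  x(t) = 1 for t in [-tau, 0].
    The delayed state is carried as a finite history buffer, so the
    infinite-dimensional state is approximated by a finite-dimensional
    difference equation, in the spirit of the state approximations above."""
    dt = tau / n_hist
    hist = [1.0] * (n_hist + 1)          # samples of x on [t - tau, t], newest last
    for _ in range(int(t_end / dt)):
        x, x_delayed = hist[-1], hist[0]
        hist.append(x + dt * (a * x + b * x_delayed))
        hist.pop(0)                      # slide the history window forward
    return hist[-1]
```

    For these (stable, illustrative) parameters the solution decays from its unit initial history toward zero.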

  4. Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation.

    PubMed

    Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin

    2007-02-01

    This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
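
    The controlled random walk at the heart of the scheme can be conveyed with a deliberately crude sketch (the two-site setup, update probabilities, and step rule below are illustrative assumptions, not the paper's team-of-automata construction): a single discretized automaton splits polling capacity between two sites and takes unit steps on a discretized [0, 1] axis in response to reward/penalty feedback.

```python
import random

def discretized_la_allocation(p_a=0.8, p_b=0.3, resolution=100,
                              steps=20000, seed=0):
    """Toy discretized learning automaton: index i on {0, ..., resolution}
    encodes the fraction i/resolution of polling capacity given to site A.
    A poll is 'rewarded' when it finds an update (probability p_a or p_b),
    and the index takes a unit step toward the site that was rewarded."""
    rng = random.Random(seed)
    i = resolution // 2                      # start from an even split
    for _ in range(steps):
        if rng.random() < i / resolution:    # poll site A
            if rng.random() < p_a:
                i = min(i + 1, resolution)   # reward: more capacity to A
            else:
                i = max(i - 1, 0)            # penalty: less capacity to A
        else:                                # poll site B
            if rng.random() < p_b:
                i = max(i - 1, 0)
            else:
                i = min(i + 1, resolution)
    return i / resolution
```

    Because site A yields updates far more often than site B here, the walk drifts toward allocating nearly all capacity to A; the resolution parameter plays the precision-setting role described in the abstract.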

  5. The Role of Content Knowledge in Ill-Structured Problem Solving for High School Physics Students

    NASA Astrophysics Data System (ADS)

    Milbourne, Jeff; Wiebe, Eric

    2018-02-01

    While Physics Education Research has a rich tradition of problem-solving scholarship, most of the work has focused on more traditional, well-defined problems. Less work has been done with ill-structured problems, problems that are better aligned with the engineering and design-based scenarios promoted by the Next Generation Science Standards. This study explored the relationship between physics content knowledge and ill-structured problem solving for two groups of high school students with different levels of content knowledge. Both groups of students completed an ill-structured problem set, using a talk-aloud procedure to narrate their thought process as they worked. Analysis of the data focused on identifying students' solution pathways, as well as the obstacles that prevented them from reaching "reasonable" solutions. Students with more content knowledge were more successful reaching reasonable solutions for each of the problems, experiencing fewer obstacles. These students also employed a greater variety of solution pathways than those with less content knowledge. Results suggest that a student's solution pathway choice may depend on how she perceives the problem.

  6. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
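
    The "computationally expensive nested procedure" mentioned above can be made concrete on a toy single-objective instance (the quadratic objectives and grid resolutions are illustrative assumptions, not test problems from the paper): every upper-level candidate triggers a full lower-level solve.

```python
def lower_level(x):
    """Lower-level task: minimize f(x, y) = (y - x)^2 over y by grid search.
    The analytic argmin is y = x; the grid search stands in for a generic
    inner optimizer that must be rerun for every upper-level candidate."""
    ys = [i / 100 for i in range(-200, 201)]
    return min(ys, key=lambda y: (y - x) ** 2)

def nested_bilevel():
    """Upper level: minimize F(x, y) = x^2 + y^2 subject to y being
    lower-level optimal. Each F-evaluation performs a complete inner
    solve, which is exactly why nested procedures scale poorly."""
    xs = [i / 100 for i in range(-200, 201)]
    x_best = min(xs, key=lambda x: x ** 2 + lower_level(x) ** 2)
    return x_best, lower_level(x_best)
```

    Even this tiny instance costs 401 x 401 objective evaluations, illustrating the quadratic blow-up that motivates replacing the nested loop with an evolutionary search.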

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus

    One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, the single-site scattering itself is also appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Lastly, the code is rigorously tested and, with the help of Krein's theorem, the relativistic effects and full-potential effects in group V elements and noble metals are thoroughly investigated.

  8. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    The application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems is discussed. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line-, or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be procured through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.
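
    The flavor of the approach can be conveyed by a much simpler linear V-cycle for the 1D Poisson equation (a textbook sketch only: the FAS scheme used in the paper additionally transfers the current solution, not just the residual, to the coarse grid so that nonlinear equations can be handled):

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for A u = (-u_{i-1} + 2 u_i - u_{i+1}) / h^2,
    with homogeneous Dirichlet boundaries."""
    ue = np.concatenate(([0.0], u, [0.0]))
    return f - (-ue[:-2] + 2.0 * ue[1:-1] - ue[2:]) / h**2

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi sweeps: the 'point-relaxation routine' of the cycle."""
    for _ in range(sweeps):
        ue = np.concatenate(([0.0], u, [0.0]))
        u = (1.0 - w) * u + w * (ue[:-2] + ue[2:] + h**2 * f) / 2.0
    return u

def v_cycle(u, f, h):
    """Recursive V-cycle: pre-smooth, restrict residual, correct, post-smooth."""
    if len(u) <= 3:                          # coarsest level: solve directly
        n = len(u)
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, f)
    u = smooth(u, f, h)
    r = residual(u, f, h)
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])    # full weighting
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)          # coarse-grid correction
    e = np.zeros_like(u)                                  # interpolate back to fine grid
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return smooth(u + e, f, h)
```

    A handful of cycles reduces the residual by several orders of magnitude, whereas point relaxation on the fine grid alone would need thousands of sweeps, which is the single-grid versus multi-grid contrast the paper quantifies.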

  9. A full field, 3-D velocimeter for microgravity crystallization experiments

    NASA Technical Reports Server (NTRS)

    Brodkey, Robert S.; Russ, Keith M.

    1991-01-01

    The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.

  10. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    This paper presents the application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems. The method consists of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line- or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be procured through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.

  11. K-TIF: a two-fluid computer program for downcomer flow dynamics. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amsden, A.A.; Harlow, F.H.

    1977-10-01

    The K-TIF computer program has been developed for numerical solution of the time-varying dynamics of steam and water in a pressurized water reactor downcomer. The current status of physical and mathematical modeling is presented in detail. The report also contains a complete description of the numerical solution technique, a full description and listing of the computer program, instructions for its use, with a sample printout for a specific test problem. A series of calculations, performed with no change in the modeling parameters, shows consistent agreement with the experimental trends over a wide range of conditions, which gives confidence to the calculations as a basis for investigating the complicated physics of steam-water flows in the downcomer.

  12. Asymptotic approximations to posterior distributions via conditional moment equations

    USGS Publications Warehouse

    Yee, J.L.; Johnson, W.O.; Samaniego, F.J.

    2002-01-01

    We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
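
    When the conditional mean functions happen to be linear, the simultaneous fixed point equations can be solved by direct iteration; a minimal sketch under that (illustrative) assumption:

```python
def fixed_point_means(a=1.0, b=0.5, c=2.0, d=0.3, tol=1e-12, max_iter=1000):
    """Solve the pair of conditional-mean fixed point equations
        mu1 = a + b*mu2,   mu2 = c + d*mu1
    by successive substitution (hypothetical linear conditional means,
    as arise for a bivariate normal). The iteration is a contraction,
    and therefore converges, whenever |b*d| < 1."""
    mu1 = mu2 = 0.0
    for _ in range(max_iter):
        new1 = a + b * mu2
        new2 = c + d * new1
        if abs(new1 - mu1) + abs(new2 - mu2) < tol:
            break
        mu1, mu2 = new1, new2
    return mu1, mu2
```

    For this linear case the fixed point is also available in closed form, mu1 = (a + b*c) / (1 - b*d), which the iteration reproduces; this mirrors the paper's remark that explicit formulae follow when the fixed point equations admit an analytical solution.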

  13. Exact Solutions to Several Nonlinear Cases of Generalized Grad-Shafranov Equation for Ideal Magnetohydrodynamic Flows in Axisymmetric Domain

    NASA Astrophysics Data System (ADS)

    Adem, Abdullahi Rashid; Moawad, Salah M.

    2018-05-01

    In this paper, the steady-state equations of ideal magnetohydrodynamic incompressible flows in axisymmetric domains are investigated. These flows are governed by a second-order elliptic partial differential equation as a type of generalized Grad-Shafranov equation. The problem of finding exact equilibria to the full governing equations in the presence of incompressible mass flows is considered. Two different types of constraints on position variables are presented to construct exact solution classes for several nonlinear cases of the governing equations. Some of the obtained results are checked for their applications to magnetic confinement plasma. Besides, they cover many previous configurations and include new considerations about the nonlinearity of magnetic flux stream variables.

  14. Techniques and Software for Monolithic Preconditioning of Moderately-sized Geodynamic Stokes Flow Problems

    NASA Astrophysics Data System (ADS)

    Sanan, Patrick; May, Dave A.; Schenk, Olaf; Bollhöffer, Matthias

    2017-04-01

    Geodynamics simulations typically involve the repeated solution of saddle-point systems arising from the Stokes equations. These computations often dominate the time to solution. Direct solvers are known for their robustness and ``black box'' properties, yet exhibit superlinear memory requirements and time to solution. More complex multilevel-preconditioned iterative solvers have been very successful for large problems, yet their use can require more effort from the practitioner in terms of setting up a solver and choosing its parameters. We champion an intermediate approach, based on leveraging the power of modern incomplete factorization techniques for indefinite symmetric matrices. These provide an interesting alternative in situations in between the regimes where direct solvers are an obvious choice and those where complex, scalable, iterative solvers are an obvious choice. That is, much like their relatives for definite systems, ILU/ICC-preconditioned Krylov methods and ILU/ICC-smoothed multigrid methods, the approaches demonstrated here provide a useful addition to the solver toolkit. We present results with a simple, PETSc-based, open-source Q2-Q1 (Taylor-Hood) finite element discretization, in 2 and 3 dimensions, with the Stokes and Lamé (linear elasticity) saddle point systems. Attention is paid to cases in which full-operator incomplete factorization gives an improvement in time to solution over direct solution methods (which may not even be feasible due to memory limitations), without the complication of more complex (or at least, less-automatic) preconditioners or smoothers. As an important factor in the relevance of these tools is their availability in portable software, we also describe open-source PETSc interfaces to the factorization routines.

  15. Versions of the collocation and least squares method for solving biharmonic equations in non-canonical domains

    NASA Astrophysics Data System (ADS)

    Belyaev, V. A.; Shapeev, V. P.

    2017-10-01

    New versions of the collocation and least squares method of high-order accuracy are proposed and implemented for the numerical solution of the boundary value problems for the biharmonic equation in non-canonical domains. The solution of the biharmonic equation is used for simulating the stress-strain state of an isotropic plate under the action of a transverse load. The differential problem is projected into a space of fourth-degree polynomials by the CLS method. The boundary conditions for the approximate solution are imposed exactly on the boundary of the computational domain. The versions of the CLS method are implemented on grids which are constructed in two different ways. Numerical experiments on the convergence of the solution of various problems on a sequence of grids show that the approximate solution converges with high order and, when an analytical solution of the test problem is known, matches it with high accuracy.
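
    The two ingredients highlighted above, projection onto a low-degree polynomial space and exact satisfaction of the boundary conditions, can be illustrated on a far simpler 1D analogue (u'' = f on (0, 1) with u(0) = u(1) = 0 rather than the biharmonic problem; the basis and point counts are illustrative choices, not those of the paper):

```python
import numpy as np

def ls_collocation(f, m=5, n_pts=40):
    """Collocation + least squares for u'' = f on (0, 1), u(0) = u(1) = 0.
    Each basis function phi_k(x) = x**(k + 1) - x**(k + 2) vanishes at both
    endpoints, so the boundary conditions are satisfied exactly, while the
    differential equation is enforced in the least-squares sense at the
    interior collocation points."""
    x = np.linspace(0.01, 0.99, n_pts)
    # phi_k''(x) = (k + 1)*k*x**(k - 1) - (k + 2)*(k + 1)*x**k
    A = np.column_stack([
        (k + 1) * k * x ** (k - 1) - (k + 2) * (k + 1) * x ** k
        for k in range(m)
    ])
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return lambda xx: sum(c[k] * (xx ** (k + 1) - xx ** (k + 2))
                          for k in range(m))
```

    With f = 2 the exact solution u = x^2 - x lies in the basis span, so the least-squares solution recovers it to round-off.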

  16. New numerical methods for open-loop and feedback solutions to dynamic optimization problems

    NASA Astrophysics Data System (ADS)

    Ghosh, Pradipto

    The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. 
A notable revelation from this development is that the resulting control law has an algebraic closed-form structure. The proposed method uses an optimal spatial statistical predictor called universal kriging to construct the surrogate model of a feedback controller, which is capable of quickly predicting an optimal control estimate based on current state (and time) information. With universal kriging, an approximation to the optimal feedback map is computed by conceptualizing a set of state-control samples from pre-computed extremals to be a particular realization of a jointly Gaussian spatial process. Feedback policies are computed for a variety of example dynamic optimization problems in order to evaluate the effectiveness of this methodology. This feedback synthesis approach is found to combine good numerical accuracy with low computational overhead, making it a suitable candidate for real-time applications. Particle swarm and universal kriging are combined for a capstone example: a near-optimal, near-admissible, full-state feedback control law is computed and tested for the heat-load-limited atmospheric-turn guidance of an aeroassisted transfer vehicle. The performance of this explicit guidance scheme is found to be very promising; initial errors in atmospheric entry due to simulated thruster misfirings are found to be accurately corrected while closely respecting the algebraic state-inequality constraint.
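
The basic swarm update underlying the first half of the thesis can be sketched generically (a textbook global-best PSO minimizing a sphere function; the inertia and acceleration constants are conventional defaults rather than the thesis settings, and none of the mixed-integer or trajectory-specific machinery is included):

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Global-best particle swarm minimization of f over a box.
    Each particle's velocity blends inertia, attraction to its own best
    position so far, and attraction to the swarm's best, with randomized
    weights; no gradient or initial guess is required."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The derivative-free, guess-free character visible here is what makes the method attractive for the nonsmooth, mixed-integer trajectory problems described above.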

  17. Problems on Divisibility of Binomial Coefficients

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Smoak, James

    2004-01-01

    Twelve unusual problems involving divisibility of the binomial coefficients are presented in this article. The problems are listed in "The Problems" section. All twelve problems have short solutions which are listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…

  18. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
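
    For reference, the core computation such a solver performs can be reduced to a minimal dense 5-point finite-difference Poisson solve on the unit square with homogeneous Dirichlet conditions (a toy analogue only: no Neumann patches, no disk or sphere geometry, no mesh refinement, no sparse iterative option), verified against a manufactured solution:

```python
import numpy as np

def solve_poisson_dirichlet(n=15):
    """Solve -laplace(u) = f on the unit square, u = 0 on the boundary,
    with the standard 5-point stencil on an n x n interior grid.
    The manufactured solution u = x(1-x) y(1-y) gives
    f = 2 y(1-y) + 2 x(1-x); since u is quadratic in each variable,
    the stencil is exact and the discrete solution matches u at the nodes."""
    h = 1.0 / (n + 1)
    x = np.arange(1, n + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2 * Y * (1 - Y) + 2 * X * (1 - X)
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0    # interior neighbors only
    u = np.linalg.solve(A / h**2, f.ravel()).reshape(n, n)
    return u, X * (1 - X) * Y * (1 - Y)         # numeric vs. exact
```

    A production solver replaces the dense matrix with sparse storage and an iterative or direct sparse solve, which is exactly the mldivide/iterative choice the program summary mentions.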

  19. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    NASA Astrophysics Data System (ADS)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
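
    Readers who want a numerical counterpart to these analytical results can simulate the underlying two-stage model with a standard Gillespie algorithm (the rate constants below are arbitrary illustrative choices; the PPA-mapping itself is not reproduced here):

```python
import random

def gillespie_two_stage(k_m=10.0, g_m=1.0, k_p=5.0, g_p=0.1,
                        t_end=1000.0, seed=7):
    """Stochastic simulation of the two-stage gene expression model:
        gene -> gene + mRNA     (rate k_m)        mRNA -> 0     (rate g_m * m)
        mRNA -> mRNA + protein  (rate k_p * m)    protein -> 0  (rate g_p * p)
    Returns time-averaged mRNA and protein copy numbers; at steady state
    <m> = k_m / g_m and <p> = <m> * k_p / g_p."""
    rng = random.Random(seed)
    t, m, p = 0.0, 0, 0
    m_area = p_area = 0.0
    while t < t_end:
        rates = [k_m, g_m * m, k_p * m, g_p * p]
        total = sum(rates)                   # always > 0 because k_m > 0
        dt = rng.expovariate(total)          # time to the next reaction
        m_area += m * dt
        p_area += p * dt
        t += dt
        u = rng.random() * total             # pick the reaction that fires
        if u < rates[0]:
            m += 1
        elif u < rates[0] + rates[1]:
            m -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            p += 1
        else:
            p -= 1
    return m_area / t, p_area / t
```

    Such a simulation gives empirical means and variances against which closed-form expressions like those derived in the paper can be checked.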

  20. Transient and asymptotic behaviour of the binary breakage problem

    NASA Astrophysics Data System (ADS)

    Mantzaris, Nikos V.

    2005-06-01

    The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.

  1. Building Efficiency Technologies by Tomorrow’s Engineers and Researchers (BETTER) Capstone. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Shannon

    BETTER Capstone supported 29 student project teams consisting of 155 students over two years in developing transformative building energy efficiency technologies through a capstone design experience. Capstone is the culmination of an undergraduate student’s engineering education. Interdisciplinary teams of students spent a semester designing and prototyping a technological solution for a variety of building energy efficiency problems. During this experience students utilized the full design process, including the manufacturing and testing of a prototype solution, as well as publicly demonstrating the solution at the Capstone Design Expo. As part of this project, students explored modern manufacturing techniques and gained hands-on experience with these techniques to produce their prototype technologies. This research added to the understanding of the challenges within building technology education and engagement with industry. One goal of the project was to help break the chicken-and-egg problem of getting students to engage more deeply with the building technology industry. It was learned, however, that this industry is less interested in trying innovative new concepts than in hiring graduates for existing conventional building efforts. While none of the projects yielded commercial success, much individual student growth and learning was accomplished, which is a long-term benefit to the public at large.

  2. Human performance on the traveling salesman problem.

    PubMed

    MacGregor, J N; Ormerod, T

    1996-05-01

    Two experiments on performance on the traveling salesman problem (TSP) are reported. The TSP consists of finding the shortest path through a set of points, returning to the origin. It appears to be an intransigent mathematical problem, and heuristics have been developed to find approximate solutions. The first experiment used 10-point, the second, 20-point problems. The experiments tested the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points. Both experiments supported the hypothesis. The experiments provided information on the quality of subjects' solutions. Their solutions clustered close to the best known solutions, were an order of magnitude better than solutions produced by three well-known heuristics, and on average fell beyond the 99.9th percentile in the distribution of random solutions. The solution process appeared to be perceptually based.
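
    The comparison between heuristic tours and best-known tours that the experiments rely on can be reproduced in miniature (nearest neighbor is one classic construction heuristic; the three heuristics used in the study are not identified in this abstract, so the choice here is illustrative):

```python
import itertools
import math
import random

def tour_length(points, order):
    """Length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points):
    """Greedy construction: repeatedly visit the closest unvisited point."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

def brute_force(points):
    """Exact optimum by enumerating all tours that start at point 0
    (feasible only for small instances, roughly up to 10 points)."""
    best = min(itertools.permutations(range(1, len(points))),
               key=lambda perm: tour_length(points, (0,) + perm))
    return [0] + list(best)
```

    On random instances the nearest-neighbor tour is typically 10-25% longer than the optimum, the kind of gap against which the human solutions in the study were scored.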

  3. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür

    2011-11-01

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  4. STAGS Example Problems Manual

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Rankin, Charles C.

    2006-01-01

    This document summarizes the STructural Analysis of General Shells (STAGS) development effort, STAGS performance for selected demonstration problems, and STAGS application problems illustrating selected advanced features available in the STAGS Version 5.0. Each problem is discussed including selected background information and reference solutions when available. The modeling and solution approach for each problem is described and illustrated. Numerical results are presented and compared with reference solutions, test data, and/or results obtained from mesh refinement studies. These solutions provide an indication of the overall capabilities of the STAGS nonlinear finite element analysis tool and provide users with representative cases, including input files, to explore these capabilities that may then be tailored to other applications.

  5. Active and Healthy Ageing as a Wicked Problem: The Contribution of a Multidisciplinary Research University.

    PubMed

    Riva, Giuseppe; Graffigna, Guendalina; Baitieri, Maddalena; Amato, Alessandra; Bonanomi, Maria Grazia; Valentini, Paolo; Castelli, Guido

    2014-01-01

    The quest for an active and healthy ageing can be considered a "wicked problem." It is a social and cultural problem, which is difficult to solve because of incomplete, changing, and contradictory requirements. These problems are tough to manage because of their social complexity. They are a group of linked problems embedded in the structure of the communities in which they occur. First, they require the knowledge of the social and cultural context in which they occur. They can be solved only by understanding of what people do and why they do it. Second, they require a multidisciplinary approach. Wicked problems can have different solutions, so it is critical to capture the full range of possibilities and interpretations. Thus, we suggest that Università Cattolica del Sacro Cuore (UCSC) is well suited for accepting and managing this challenge because of its applied research orientation, multidisciplinary approach, and integrated vision. After presenting the research activity of UCSC, we describe a possible "systems thinking" strategy to consider the complexity and interdependence of active ageing and healthy living.

  6. Multiple-Solution Problems in a Statistics Classroom: An Example

    ERIC Educational Resources Information Center

    Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing

    2017-01-01

    The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact…

  7. The thermoelastic Aldo contact model with frictional heating

    NASA Astrophysics Data System (ADS)

    Afferrante, L.; Ciavarella, M.

    2004-03-01

In the study of the essential features of thermoelastic contact, Comninou and Dundurs (J. Therm. Stresses 3 (1980) 427) devised a simplified model, the so-called "Aldo model", in which the full 3D body is replaced by a large number of thin rods normal to the interface and insulated from one another; the system was further reduced to two rods by Barber's conjecture (ASME J. Appl. Mech. 48 (1981) 555). They studied in particular the case of heat flux at the interface driven by the temperature difference between the bodies and opposed by a contact resistance, finding possible multiple and history-dependent solutions, depending on the imposed temperature difference. The Aldo model is here extended to include the presence of frictional heating. It is found that the number of solutions of the problem is still always odd, and that Barber's graphical construction and the stability analysis of the previous case without frictional heating can be extended. For any given imposed temperature difference, a critical speed is found at which the uniform-pressure solution becomes non-unique and/or unstable. For one direction of the temperature difference, the uniform-pressure solution is non-unique before it becomes unstable. When multiple solutions occur, the outermost solutions (those involving only one rod in contact) are always stable. A full numerical analysis has been performed to explore the transient behaviour of the system in the case of two rods of different size. In the general case of N rods, Barber's conjecture is shown to hold, since there can be only two stable states for all the rods, and the reduction to two rods is always possible a posteriori.

  8. A Production System Version of the Hearsay-II Speech Understanding System

    DTIC Science & Technology

    1978-04-01

…much greater than that if HSP's less powerful synchronization mechanisms turn out to be adequate with a full complement of KSs. Finally, HSP is found to aid solution of the Small Address Problem, as it…earlier version of HSII which had a limited set of KSs. With a richer set of KSs and/or a reduction (or elimination) of synchronization overheads, the…

  9. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

A rapid computation of a sequence of transonic flow solutions must be performed in many areas of aerodynamic technology. The use of low-cost vector array processors makes such calculations economically feasible. However, to utilize the new hardware fully, the developed algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  10. JPRS Report. Soviet Union: Political Affairs

    DTIC Science & Technology

    1988-11-18

administrative system for the city's fruit and vegetable conveyor, from field to store shelf, in the results of which those who raise, procure and sell the…full severity of the law. The vicious circle in the existing outlay system for the procurement and sales of fruit and vegetables can be broken if the…conduct the battle by banning vodka but not combating alcohol abuse than it is to find solutions to the problems specified in the national program for

  11. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
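The core idea, replacing the infinite-dimensional delayed state by a finite buffer of past values so that the hereditary system becomes an ordinary difference equation, can be sketched on a scalar test equation (the equation, coefficients, and piecewise-constant history below are illustrative choices of mine, not taken from the paper):

```python
import numpy as np

def simulate_delay_system(T=5.0, tau=1.0, n=100):
    """Approximate x'(t) = -x(t) + 0.5*x(t - tau) by a difference
    equation: the delayed state is carried as a buffer of past values
    (a piecewise-constant history), turning the hereditary system into
    a finite-dimensional discrete one."""
    h = tau / n                       # step size tied to the delay
    steps = round(T / h)
    hist = np.ones(n + 1)             # discretized history on [-tau, 0]
    xs = [hist[-1]]
    for _ in range(steps):
        x_now, x_del = hist[-1], hist[0]
        x_next = x_now + h * (-x_now + 0.5 * x_del)   # explicit Euler step
        hist = np.append(hist[1:], x_next)            # shift the history buffer
        xs.append(x_next)
    return np.array(xs)

x = simulate_delay_system()
print(x[-1])   # the state decays from 1 toward 0
```

As in the schemes described above, refining the buffer (larger n) makes the finite-dimensional state approximate the true hereditary state more closely.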

  12. Guaranteed estimation of solutions to Helmholtz transmission problems with uncertain data from their indirect noisy observations

    NASA Astrophysics Data System (ADS)

    Podlipenko, Yu. K.; Shestopalov, Yu. V.

    2017-09-01

We investigate the guaranteed estimation problem of linear functionals from solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of the equations entering the statements of the transmission problems and the statistical characteristics of the observation errors are supposed to be unknown, belonging to certain sets. It is shown that the optimal linear mean-square estimates of the above-mentioned functionals and the estimation errors are expressed via solutions to systems of transmission problems of a special type. The results and techniques can be applied in the analysis and estimation of solutions to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of wave diffraction on transparent bodies.

  13. Immediate Truth--Temporal Contiguity between a Cognitive Problem and Its Solution Determines Experienced Veracity of the Solution

    ERIC Educational Resources Information Center

    Topolinski, Sascha; Reber, Rolf

    2010-01-01

    A temporal contiguity hypothesis for the experience of veracity is tested which states that a solution candidate to a cognitive problem is more likely to be experienced as correct the faster it succeeds the problem. Experiment 1 varied the onset time of the appearance of proposed solutions to anagrams (50 ms vs. 150 ms) and found for both correct…

  14. Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…

  15. Implementation of the full viscoresistive magnetohydrodynamic equations in a nonlinear finite element code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Haverkort, J. W. (Dutch Institute for Fundamental Energy Research, Eindhoven); Blank, H. J. de

Numerical simulations form an indispensable tool to understand the behavior of a hot plasma that is created inside a tokamak for providing nuclear fusion energy. Various aspects of tokamak plasmas have been successfully studied through the reduced magnetohydrodynamic (MHD) model. The need for more complete modeling through the full MHD equations is addressed here. Our computational method is presented along with measures against possible problems regarding pollution, stability, and regularity. The problem of ensuring continuity of solutions in the center of a polar grid is addressed in the context of a finite element discretization of the full MHD equations. A rigorous and generally applicable solution is proposed here. Useful analytical test cases are devised to verify the correct implementation of the momentum and induction equations, the hyperdiffusive terms, and the accuracy with which highly anisotropic diffusion can be simulated. A striking observation is that highly anisotropic diffusion can be treated with the same order of accuracy as isotropic diffusion, even on non-aligned grids, as long as these grids are generated with sufficient care. This property is shown to be associated with our use of a magnetic vector potential to describe the magnetic field. Several well-known instabilities are simulated to demonstrate the capabilities of the new method. The linear growth rates of an internal kink mode and a tearing mode are benchmarked against the results of a linear MHD code. The evolution of a tearing mode and the resulting magnetic islands are simulated well into the nonlinear regime. The results are compared with predictions from the reduced MHD model. Finally, a simulation of a ballooning mode illustrates the possibility to use our method as an ideal MHD method without the need to add any physical dissipation.

  16. Code for Multiblock CFD and Heat-Transfer Computations

    NASA Technical Reports Server (NTRS)

    Fabian, John C.; Heidmann, James D.; Lucci, Barbara L.; Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur

    2006-01-01

    The NASA Glenn Research Center General Multi-Block Navier-Stokes Convective Heat Transfer Code, Glenn-HT, has been used extensively to predict heat transfer and fluid flow for a variety of steady gas turbine engine problems. Recently, the Glenn-HT code has been completely rewritten in Fortran 90/95, a more object-oriented language that allows programmers to create code that is more modular and makes more efficient use of data structures. The new implementation takes full advantage of the capabilities of the Fortran 90/95 programming language. As a result, the Glenn-HT code now provides dynamic memory allocation, modular design, and unsteady flow capability. This allows for the heat-transfer analysis of a full turbine stage. The code has been demonstrated for an unsteady inflow condition, and gridding efforts have been initiated for a full turbine stage unsteady calculation. This analysis will be the first to simultaneously include the effects of rotation, blade interaction, film cooling, and tip clearance with recessed tip on turbine heat transfer and cooling performance. Future plans call for the application of the new Glenn-HT code to a range of gas turbine engine problems of current interest to the heat-transfer community. The new unsteady flow capability will allow researchers to predict the effect of unsteady flow phenomena upon the convective heat transfer of turbine blades and vanes. Work will also continue on the development of conjugate heat-transfer capability in the code, where simultaneous solution of convective and conductive heat-transfer domains is accomplished. Finally, advanced turbulence and fluid flow models and automatic gridding techniques are being developed that will be applied to the Glenn-HT code and solution process.

  17. A Structured Approach to Teaching Applied Problem Solving through Technology Assessment.

    ERIC Educational Resources Information Center

    Fischbach, Fritz A.; Sell, Nancy J.

    1986-01-01

    Describes an approach to problem solving based on real-world problems. Discusses problem analysis and definitions, preparation of briefing documents, solution finding techniques (brainstorming and synectics), solution evaluation and judgment, and implementation. (JM)

  18. Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G

    2014-01-01

The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper will describe the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.
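The paper's coupling scheme is specific to VERA-CS, but the general pattern of coupling a neutronics solver to a thermal-hydraulics solver can be illustrated by a Picard (fixed-point) iteration between two toy single-physics models (all functional forms and coefficients below are invented for illustration, not taken from the paper):

```python
# Picard coupling of two toy "physics" solvers: neutronics returns power
# given a temperature (negative fuel-temperature feedback), and
# thermal-hydraulics returns a temperature given a power. Iterating the
# pair to a fixed point yields a mutually consistent multiphysics state.

def neutronics(T):          # toy model: power falls as fuel temperature rises
    return 100.0 / (1.0 + 0.002 * (T - 300.0))

def thermal_hydraulics(P):  # toy model: temperature rises with power
    return 300.0 + 1.5 * P

P, T = 100.0, 300.0
for _ in range(100):
    P_new = neutronics(T)
    T_new = thermal_hydraulics(P_new)
    converged = abs(P_new - P) < 1e-10 and abs(T_new - T) < 1e-10
    P, T = P_new, T_new
    if converged:
        break

print(P, T)   # converged, mutually consistent power and temperature
```

Because each solver's sensitivity to the other's output is modest here, the iteration contracts quickly; real coupled codes add under-relaxation or Newton-like acceleration when the feedback is stronger.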

  19. A gradient system solution to Potts mean field equations and its electronic implementation.

    PubMed

    Urahama, K; Ueno, S

    1993-03-01

    A gradient system solution method is presented for solving Potts mean field equations for combinatorial optimization problems subject to winner-take-all constraints. In the proposed solution method the optimum solution is searched by using gradient descent differential equations whose trajectory is confined within the feasible solution space of optimization problems. This gradient system is proven theoretically to always produce a legal local optimum solution of combinatorial optimization problems. An elementary analog electronic circuit implementing the presented method is designed on the basis of current-mode subthreshold MOS technologies. The core constituent of the circuit is the winner-take-all circuit developed by Lazzaro et al. Correct functioning of the presented circuit is exemplified with simulations of the circuits implementing the scheme for solving the shortest path problems.
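As a rough sketch of the idea, confining a gradient descent to the feasible set of winner-take-all constraints, one can parametrize the mean-field probabilities by a row-wise softmax, so each row stays on the probability simplex throughout the descent. The toy anti-ferromagnetic Potts problem and all coefficients below are my own illustrative choices, not the circuit or equations of the paper:

```python
import numpy as np

# Toy 3-node, 2-state "anti-ferromagnetic" Potts problem on a path graph:
# adjacent nodes should take different states. The winner-take-all
# constraint (each row of P sums to 1) is enforced by writing P as a
# row-wise softmax of logits U, so the descent trajectory never leaves
# the feasible simplex.

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path graph 0-1-2

def softmax(U):
    E = np.exp(U - U.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

U = rng.normal(scale=0.1, size=(3, 2))            # logits
for _ in range(500):
    P = softmax(U)
    grad_P = A @ P   # ∝ gradient of E(P) = Σ_ij A_ij P_i·P_j (factor 2 in step size)
    # chain rule through the row-wise softmax
    grad_U = P * (grad_P - (grad_P * P).sum(axis=1, keepdims=True))
    U -= 0.5 * grad_U

P = softmax(U)
labels = P.argmax(axis=1)
print(labels)   # end nodes share a state, middle node differs
```

The descent settles on a legal local optimum: an alternating labeling of the path, which is exactly the combinatorial solution the mean-field relaxation is meant to recover.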

An iterative solution of an integral equation for radiative transfer by using a variational technique

    NASA Technical Reports Server (NTRS)

    Yoshikawa, K. K.

    1973-01-01

An effective iterative technique is introduced to solve a nonlinear integral equation frequently associated with radiative transfer problems. The problem is formulated in such a way that each step of an iterative sequence requires the solution of a linear integral equation. The advantage of a previously introduced variational technique, which utilizes a stepwise-constant trial function, is exploited to cope with the nonlinear problem. The method is simple and straightforward. Rapid convergence is obtained by employing a linear interpolation of the iterative solutions. Using absorption coefficients of the Milne-Eddington type, which are applicable to some planetary atmospheric radiation problems, solutions are found in terms of temperature and radiative flux. These solutions are presented numerically and show excellent agreement with other numerical solutions.
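The structure of such a scheme, where each iterate requires only the solution of a linear integral equation, can be sketched on a model problem (the kernel, forcing term, and coupling constant below are illustrative, not the Milne-Eddington setting of the paper):

```python
import numpy as np

# Solve the nonlinear integral equation
#   u(x) = f(x) + 0.1 * ∫_0^1 K(x,y) u(y)^2 dy
# by iteration: the nonlinearity u^2 is linearized as u_n(y)*u_{n+1}(y),
# so each step solves a LINEAR integral equation, discretized here with
# trapezoidal quadrature.

n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
K = np.exp(-np.abs(x[:, None] - x[None, :]))              # smooth model kernel
f = np.cos(x)

u = f.copy()                                              # initial guess
for _ in range(50):
    # linear system: (I - 0.1 * K * diag(w * u_n)) u_{n+1} = f
    A = np.eye(n) - 0.1 * K * (w * u)[None, :]
    u_new = np.linalg.solve(A, f)
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

residual = u - (f + 0.1 * K @ (w * u ** 2))
print(np.max(np.abs(residual)))   # residual of the nonlinear equation
```

With the weak coupling chosen here the map contracts, and a handful of linear solves drives the nonlinear residual to machine precision, mirroring the rapid convergence reported above.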

  1. A non-local free boundary problem arising in a theory of financial bubbles

    PubMed Central

    Berestycki, Henri; Monneau, Regis; Scheinkman, José A.

    2014-01-01

We consider an evolution non-local free boundary problem that arises in the modelling of speculative bubbles. The solution of the model is the speculative component in the price of an asset. In the framework of viscosity solutions, we show the existence and uniqueness of the solution. We also show that the solution is convex in space, and establish several monotonicity properties of the solution and of the free boundary with respect to parameters of the problem. To study the free boundary, we use, in particular, the fact that the odd part of the solution solves a more standard obstacle problem. We establish the regularity of the free boundary and describe its asymptotics as c, the cost of transacting the asset, goes to zero. PMID:25288815

  2. Modifying PASVART to solve singular nonlinear 2-point boundary problems

    NASA Technical Reports Server (NTRS)

    Fulton, James P.

    1988-01-01

To study the buckling and post-buckling behavior of shells and various other structures, one must solve a nonlinear 2-point boundary value problem. Since closed-form analytic solutions for such problems are virtually nonexistent, numerical approximations are inevitable. This makes the availability of accurate and reliable software indispensable. In a series of papers, Lentini and Pereyra, expanding on the work of Keller, developed PASVART: an adaptive finite difference solver for nonlinear 2-point boundary value problems. While the program does produce extremely accurate solutions with great efficiency, it is hindered by a major limitation: PASVART will only locate isolated solutions of the problem. In buckling problems, the solution set is not unique. It will contain singular or bifurcation points, where different branches of the solution set may intersect. Thus, PASVART is useless precisely when the problem becomes interesting. To resolve this deficiency, we propose a modification of PASVART that will enable the user to perform a more complete bifurcation analysis: PASVART would be combined with the Thurston bifurcation solution, an adaptation of Newton's method motivated by the work of Koiter and reinterpreted by Thurston as an iterative computational method.

  3. Eshelby's problem of a spherical inclusion eccentrically embedded in a finite spherical body

    PubMed Central

    He, Q.-C.

    2017-01-01

    Resorting to the superposition principle, the solution of Eshelby's problem of a spherical inclusion located eccentrically inside a finite spherical domain is obtained in two steps: (i) the solution to the problem of a spherical inclusion in an infinite space; (ii) the solution to the auxiliary problem of the corresponding finite spherical domain subjected to appropriate boundary conditions. Moreover, a set of functions called the sectional and harmonic deviators are proposed and developed to work out the auxiliary solution in a series form, including the displacement and Eshelby tensor fields. The analytical solutions are explicitly obtained and illustrated when the geometric and physical parameters and the boundary condition are specified. PMID:28293141

  4. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  5. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  6. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution; moreover, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
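The ill-posedness described here, small data errors yielding unbounded solution errors, is conventionally tamed by regularization. The following sketch applies generic Tikhonov regularization to a synthetic smoothing forward operator (not the paper's finite element head model or its adaptive method) to show the effect:

```python
import numpy as np

# Tikhonov regularization for an ill-posed linear inverse problem:
#   minimize ||G s - d||^2 + lam * ||s||^2
# G is a synthetic smoothing (hence ill-conditioned) forward operator
# standing in for a scalp-potential model; s_true is a sparse source.

rng = np.random.default_rng(1)
m, p = 32, 64                     # "electrodes", source parameters
grid_m = np.arange(m)[:, None] / m
grid_p = np.arange(p)[None, :] / p
G = np.exp(-0.5 * ((grid_m - grid_p) / 0.1) ** 2)
s_true = np.zeros(p); s_true[20] = 1.0; s_true[45] = -0.5
d = G @ s_true + 0.01 * rng.normal(size=m)        # noisy measurements

def tikhonov(G, d, lam):
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)

s_naive = np.linalg.lstsq(G, d, rcond=None)[0]    # unregularized inversion
s_reg = tikhonov(G, d, lam=1e-2)
print(np.linalg.norm(s_reg - s_true), np.linalg.norm(s_naive - s_true))
```

The regularized estimate trades a small bias for stability; the naive inversion amplifies the measurement noise through the operator's tiny singular values, exactly the instability the abstract describes.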

  7. A problem-solving approach to nutrition education with Filipino mothers.

    PubMed

    Ticao, C J; Aboud, F E

    1998-06-01

    The study examined Filipino mothers' problem solving on issues related to child feeding, using a dyadic, peer-help approach. The participants were mothers of children under 6 yr of age from a village in the southern Philippines, where malnutrition among children is prevalent. Mothers were paired with a mutual friend (each nominated the other as a best friend) or a unilateral friend (only one nominated the other as a best friend) to discuss a feeding problem to which they initially gave similar solutions (agreed) and one to which they gave different solutions (disagreed). In the final step, they were asked to give privately the solutions they considered best for the problem. The number and quality of these final-step solutions were analyzed as a function of the friend relation, the level of initial agreement with their friend partner, and the source of the solution. Results indicated that the quantity and quality of solutions increased from before to after the dyadic discussion, especially among mothers paired with a mutual friend with whom they agreed. Most of their final-step solutions came from ones they themselves had generated during the discussion, not ones their friend partner had proposed. There was also evidence that high quality solutions were generated by mothers paired with a disagreeing unilateral friend. Implications for nutrition education concern the benefits of a peer-help, dyadic problem-solving approach, taking into account the role of a friend in facilitating a mother's production of new solutions to child feeding problems. The procedure may be used by health promoters who want to build capacities and self-reliance through collective problem solving.

  8. Graphing as a Problem-Solving Strategy.

    ERIC Educational Resources Information Center

    Cohen, Donald

    1984-01-01

    The focus is on how line graphs can be used to approximate solutions to rate problems and to suggest equations that offer exact algebraic solutions to the problem. Four problems requiring progressively greater graphing sophistication are presented plus four exercises. (MNS)
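A minimal worked instance of the graph-then-equation strategy (the pursuit problem below is my own, not one of the article's four): tabulating the two distance lines, as one would plot them, brackets the crossing, and equating the lines gives the exact algebraic answer.

```python
from fractions import Fraction

# Rate problem: A leaves at 12 mph; B leaves 1 hour later at 18 mph.
# When does B catch up?

def d_A(t): return 12 * t                        # miles after t hours
def d_B(t): return 18 * (t - 1) if t >= 1 else 0

# "graphical" approximation: scan the tabulated lines for the crossing
for t10 in range(0, 50):
    t = t10 / 10
    if d_B(t) >= d_A(t) > 0:
        approx = t
        break

# exact algebraic solution: 12 t = 18 (t - 1)  =>  t = 3
exact = Fraction(18, 18 - 12)
print(approx, exact)   # -> 3.0 3
```

The scan plays the role of reading the intersection off the graph; the equation confirms it exactly, which is precisely the progression the article advocates.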

  9. Use of a Computer Language in Teaching Dynamic Programming. Final Report.

    ERIC Educational Resources Information Center

    Trimble, C. J.; And Others

Most optimization problems of any degree of complexity must be solved using a computer. In the teaching of dynamic programming courses, it is often desirable to use a computer in problem solution. The solution process involves conceptual formulation and computational solution. Generalized computer codes for dynamic programming problem solution…
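As an illustration of the kind of stage-by-stage recursion such generalized dynamic-programming codes implement (the 0/1 knapsack below is a standard textbook example, not taken from the report):

```python
# 0/1 knapsack by the standard dynamic-programming recursion
#   f(i, c) = max(f(i-1, c), v_i + f(i-1, c - w_i)),
# stored in a single capacity-indexed table updated item by item.

def knapsack(weights, values, capacity):
    f = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # backwards: each item used once
            f[c] = max(f[c], v + f[c - w])
    return f[capacity]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # -> 7 (items of weight 2 and 3)
```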

  10. Why PQ?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peccei, R. D.

    2010-08-30

I discuss how the solution of the U(1)_A problem of QCD through the existence of the θ-vacuum gave rise to the strong CP problem. After examining various suggested solutions to this problem, I conclude that the only viable solution still is one that involves the existence of a spontaneously broken chiral symmetry: U(1)_PQ.

  11. Human Performance on Visually Presented Traveling Salesperson Problems with Varying Numbers of Nodes

    ERIC Educational Resources Information Center

    Dry, Matthew; Lee, Michael D.; Vickers, Douglas; Hughes, Peter

    2006-01-01

We investigated the properties of the distribution of human solution times for Traveling Salesperson Problems (TSPs) with increasing numbers of nodes. New experimental data are presented that measure solution times for carefully chosen representative problems with 10, 20, …, 120 nodes. We compared the solution times predicted by the convex hull…
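For context on solution quality rather than solution time, a brute-force optimum can be compared against a simple constructive heuristic on a small instance (the nearest-neighbour baseline below is a generic heuristic of mine, not the convex-hull model the authors test):

```python
import itertools
import math
import random

# Compare the exact optimal tour (brute force over permutations, feasible
# only for small n) with a nearest-neighbour constructive tour.

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_len(order):
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(start=0):
    left, order = set(range(1, len(pts))), [start]
    while left:                      # greedily visit the closest unvisited city
        nxt = min(left, key=lambda j: dist(pts[order[-1]], pts[j]))
        left.remove(nxt)
        order.append(nxt)
    return order

best = min(itertools.permutations(range(1, len(pts))),
           key=lambda p: tour_len((0,) + p))
nn = nearest_neighbour()
print(tour_len((0,) + best), tour_len(nn))
```

The factorial growth of the brute-force search is why human (and heuristic) performance on larger instances, such as the 120-node problems above, is interesting at all.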

  12. Relationships between Undergraduates' Argumentation Skills, Conceptual Quality of Problem Solutions, and Problem Solving Strategies in Introductory Physics

    ERIC Educational Resources Information Center

    Rebello, Carina M.

    2012-01-01

    This study explored the effects of alternative forms of argumentation on undergraduates' physics solutions in introductory calculus-based physics. A two-phase concurrent mixed methods design was employed to investigate relationships between undergraduates' written argumentation abilities, conceptual quality of problem solutions, as well…

  13. The solution of the sixth Hilbert problem: the ultimate Galilean revolution

    NASA Astrophysics Data System (ADS)

    D'Ariano, Giacomo Mauro

    2018-04-01

    I argue for a full mathematization of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: `physics from no physics'. Although this may seem an oxymoron, it is the royal road to keep complete logical coherence, hence falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only the interpretation of the mathematics, ranging from the axioms to the final theorems. On the contrary, the postulates of the two current major physical theories either do not have physical interpretation (as for von Neumann's axioms for quantum theory), or contain physical primitives as `clock', `rigid rod', `force', `inertial mass' (as for special relativity and mechanics). A purely mathematical theory as proposed here, though with limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which natural sciences can firmly rely. Such kind of theory is what I consider to be the solution of the sixth Hilbert problem. I argue that a prototype example of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretical derivation of quantum theory and free quantum field theory. This article is part of the theme issue `Hilbert's sixth problem'.

  14. Operator for object recognition and scene analysis by estimation of set occupancy with noisy and incomplete data sets

    NASA Astrophysics Data System (ADS)

    Rees, S. J.; Jones, Bryan F.

    1992-11-01

    Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.

  15. The solution of the sixth Hilbert problem: the ultimate Galilean revolution.

    PubMed

    D'Ariano, Giacomo Mauro

    2018-04-28

I argue for a full mathematization of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: 'physics from no physics'. Although this may seem an oxymoron, it is the royal road to keep complete logical coherence, hence falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only to the interpretation of the mathematics, ranging from the axioms to the final theorems. On the contrary, the postulates of the two current major physical theories either do not have physical interpretation (as for von Neumann's axioms for quantum theory), or contain physical primitives such as 'clock', 'rigid rod', 'force', 'inertial mass' (as for special relativity and mechanics). A purely mathematical theory as proposed here, though with a limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which the natural sciences can firmly rely. A theory of this kind is what I consider to be the solution of the sixth Hilbert problem. I argue that a prototype example of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretical derivation of quantum theory and free quantum field theory. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  16. Finite difference solutions of heat conduction problems in multi-layered bodies with complex geometries

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.

    1984-01-01

    A numerical procedure is presented for analyzing a wide variety of heat conduction problems in multilayered bodies having complex geometry. The method is based on a finite difference solution of the heat conduction equation using a body fitted coordinate system transformation. Solution techniques are described for steady and transient problems with and without internal energy generation. Results are found to compare favorably with several well known solutions.
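
    The paper's body-fitted coordinate transformation is beyond a short sketch, but its core ingredient, an explicit finite-difference step of the heat conduction equation across layers of different conductivity, can be illustrated in one dimension. The harmonic-mean interface treatment and all values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def step_two_layer(T, k, rho_c, dx, dt):
    """One explicit finite-difference step of 1-D transient heat conduction
    with piecewise-constant conductivity k (one value per cell).
    Interface fluxes use the harmonic mean of adjacent cell conductivities."""
    k_face = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])  # conductivity at cell faces
    flux = k_face * (T[1:] - T[:-1]) / dx             # Fourier flux between cells
    Tn = T.copy()
    Tn[1:-1] += dt / (rho_c * dx) * (flux[1:] - flux[:-1])
    return Tn                                         # boundary cells held fixed

# Two-layer slab: left half k = 1, right half k = 0.1; T = 1 on the left wall,
# T = 0 on the right wall. At steady state the flux is continuous, so the
# temperature gradient is ten times steeper in the low-conductivity layer.
n, rho_c = 30, 1.0
dx = 1.0 / n
k = np.where(np.arange(n) < n // 2, 1.0, 0.1)
dt = 0.4 * rho_c * dx**2 / k.max()                    # explicit stability limit
T = np.zeros(n); T[0] = 1.0
for _ in range(15000):                                # march to (near) steady state
    T = step_two_layer(T, k, rho_c, dx, dt)
```

    The harmonic mean keeps the interface flux consistent when the conductivity jumps between layers; the body-fitted transformation in the paper generalizes this idea to complex geometries.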

  17. Solution to the mean king's problem with mutually unbiased bases for arbitrary levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Gen; Tanaka, Hajime; Ozawa, Masanao

    2006-05-15

The mean king's problem with mutually unbiased bases is reconsidered for arbitrary d-level systems. Hayashi et al. [Phys. Rev. A 71, 052331 (2005)] related the problem to the existence of a maximal set of d-1 mutually orthogonal Latin squares, in their restricted setting that allows only measurements of projection-valued measures. However, we then cannot find a solution to the problem when, e.g., d=6 or d=10. In contrast to their result, we show that the king's problem always has a solution for arbitrary levels if we also allow positive operator-valued measures. In constructing the solution, we use orthogonal arrays in combinatorial design theory.
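
    The Latin-square connection can be made concrete: for prime d, a maximal set of d-1 mutually orthogonal Latin squares exists and is easy to construct. The sketch below is illustrative only (it is not the paper's orthogonal-array POVM construction, and the construction fails for non-prime-power orders such as d=6):

```python
def mols(p):
    """For prime p, the squares L_k[i][j] = (k*i + j) mod p, k = 1..p-1,
    form a maximal set of p-1 mutually orthogonal Latin squares (MOLS)."""
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def is_latin(L):
    """Every symbol appears exactly once in each row and each column."""
    n = len(L)
    return (all(len(set(row)) == n for row in L) and
            all(len({L[i][j] for i in range(n)}) == n for j in range(n)))

def orthogonal(A, B):
    """A and B are orthogonal iff superimposing them yields every ordered
    pair of symbols exactly once."""
    n = len(A)
    return len({(A[i][j], B[i][j]) for i in range(n) for j in range(n)}) == n * n

squares = mols(5)   # four pairwise-orthogonal 5x5 Latin squares
```

    Orthogonality follows because, for k != m, the system k*i + j = u, m*i + j = v has a unique solution (i, j) modulo a prime p.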

  18. Early Design Choices: Capture, Model, Integrate, Analyze, Simulate

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2004-01-01

I. Designs are constructed incrementally to meet requirements and solve problems: a) Requirements types: objectives, scenarios, constraints, ilities, etc. b) Problem/issue types: risk/safety, cost/difficulty, interaction, conflict, etc. II. Capture requirements, problems and solutions: a) Collect design and analysis products and make them accessible for integration and analysis; b) Link changes in design requirements, problems and solutions; and c) Harvest design data for design models and choice structures. III. System designs are constructed by multiple groups designing interacting subsystems: a) Diverse problems, choice criteria, analysis methods and point solutions. IV. Support integration and global analysis of repercussions: a) System implications of point solutions; b) Broad analysis of interactions beyond totals of mass, cost, etc.

  19. H2, fixed architecture, control design for large scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1990-01-01

    The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
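
    As a minimal illustration of homotopy as a numerical tool, the sketch below deforms a trivial system into a target nonlinear system and tracks the solution with a Newton corrector. This is a generic continuation scheme under simplifying assumptions, not the modified homotopy procedure of the thesis:

```python
import numpy as np

def homotopy_solve(F, J, x0, steps=50):
    """Generic homotopy continuation sketch: deform the trivial system
    G(x) = x - x0 into the target system F(x) = 0 via
        H(x, t) = t * F(x) + (1 - t) * (x - x0),
    tracking the solution from t = 0 to t = 1 with a Newton corrector."""
    x = np.asarray(x0, float)
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(20):                          # Newton corrector at fixed t
            H = t * F(x) + (1 - t) * (x - x0)
            dH = t * J(x) + (1 - t) * np.eye(len(x))
            dx = np.linalg.solve(dH, H)
            x = x - dx
            if np.linalg.norm(dx) < 1e-12:
                break
    return x
```

    Each continuation step starts Newton from the previous solution, which is how a solution branch can be followed; bifurcations of the kind described in the abstract appear when the Jacobian dH becomes singular along the path.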

  20. Modelling atmospheric flows with adaptive moving meshes

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Smolarkiewicz, Piotr K.; Dörnbrack, Andreas

    2012-04-01

    An anelastic atmospheric flow solver has been developed that combines semi-implicit non-oscillatory forward-in-time numerics with a solution-adaptive mesh capability. A key feature of the solver is the unification of a mesh adaptation apparatus, based on moving mesh partial differential equations (PDEs), with the rigorous formulation of the governing anelastic PDEs in generalised time-dependent curvilinear coordinates. The solver development includes an enhancement of the flux-form multidimensional positive definite advection transport algorithm (MPDATA) - employed in the integration of the underlying anelastic PDEs - that ensures full compatibility with mass continuity under moving meshes. In addition, to satisfy the geometric conservation law (GCL) tensor identity under general moving meshes, a diagnostic approach is proposed based on the treatment of the GCL as an elliptic problem. The benefits of the solution-adaptive moving mesh technique for the simulation of multiscale atmospheric flows are demonstrated. The developed solver is verified for two idealised flow problems with distinct levels of complexity: passive scalar advection in a prescribed deformational flow, and the life cycle of a large-scale atmospheric baroclinic wave instability showing fine-scale phenomena of fronts and internal gravity waves.

  1. Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Xiao-Chuan; Keyes, David; Yang, Chao

    2014-09-29

The focus of the project is on the development and customization of some highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equations. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have many advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome the disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
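
    The contrast between field-by-field splitting and a fully coupled solve can be seen on a toy two-"field" system: a fully coupled Newton iteration keeps the off-diagonal (inter-field) terms in every linear solve, which a field-by-field iteration would discard. The system below is purely illustrative, not a nuclear fuel model:

```python
import numpy as np

def residual(x):
    """Two coupled nonlinear 'fields' u and v; the 0.5 terms are the
    inter-field coupling that field-by-field iterations would freeze."""
    u, v = x
    return np.array([u + u**3 + 0.5 * v - 1.0,
                     v + v**3 + 0.5 * u - 2.0])

def jacobian(x):
    u, v = x
    return np.array([[1 + 3 * u**2, 0.5],
                     [0.5, 1 + 3 * v**2]])

def newton_coupled(x0, tol=1e-12, maxit=50):
    """Fully coupled Newton: linearise both fields simultaneously, so the
    off-diagonal couplings enter every linear solve."""
    x = np.asarray(x0, float)
    for it in range(maxit):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, it
        x = x - np.linalg.solve(jacobian(x), r)
    return x, maxit
```

    A field-by-field variant would instead solve each scalar equation with the other unknown held fixed and iterate; it needs no coupled linear algebra but converges only linearly, which is the trade-off the abstract describes.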

  2. UAS Integration into the NAS: Unmanned Aircraft System (UAS) Delegation of Separation

    NASA Technical Reports Server (NTRS)

    Fern, Lisa Carolynn; Kenny, Caitlin Ailis

    2012-01-01

The FAA Modernization and Reform Act of 2012 mandates UAS integration into the NAS by 2015. Operators must be able to safely maneuver UAS to maintain separation and collision avoidance. Delegated Separation is defined as the transfer of responsibility for maintaining separation between aircraft or vehicles from the air navigation service provider to the relevant flight operator, and will likely begin in sparsely trafficked areas before moving to more heavily populated airspace. Because UAS operate primarily in areas with lower traffic density and routinely perform maneuvers that are currently managed through special handling, they have the advantage of becoming an early adopter of delegated separation. This experiment will examine whether UAS are capable of performing delegated separation, maintaining 5 nm horizontal and 1000 ft vertical distances, under two delegation conditions. In Extended Delegation, ATC is responsible for identifying problems, while identification and implementation of the solution, and subsequent monitoring, are delegated to the pilot. In Full Delegation, the pilots are responsible for all tasks related to separation assurance: identification of problems and solutions, implementation and monitoring.

  3. Nonlinear evolution of the first mode supersonic oblique waves in compressible boundary layers. Part 1: Heated/cooled walls

    NASA Technical Reports Server (NTRS)

    Gajjar, J. S. B.

    1993-01-01

The nonlinear stability of an oblique mode propagating in a two-dimensional compressible boundary layer is considered under the long wave-length approximation. The growth rate of the wave is assumed to be small so that the concept of unsteady nonlinear critical layers can be used. It is shown that the spatial/temporal evolution of the mode is governed by a pair of coupled unsteady nonlinear equations for the disturbance vorticity and density. Expressions for the linear growth rate show clearly the effects of wall heating and cooling and in particular how heating destabilizes the boundary layer for these long wavelength inviscid modes at O(1) Mach numbers. A generalized expression for the linear growth rate is obtained and is shown to compare very well for a range of frequencies and wave-angles at moderate Mach numbers with full numerical solutions of the linear stability problem. The numerical solution of the nonlinear unsteady critical layer problem using a novel method based on Fourier decomposition and Chebyshev collocation is discussed and some results are presented.
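
    A standard building block for the Chebyshev collocation mentioned above is the spectral differentiation matrix on the points x_j = cos(j*pi/N). The sketch below follows the classic construction (with the negative-sum trick for the diagonal); it is a generic utility, not the paper's solver:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix D and points x on
    x_j = cos(j*pi/N), j = 0..N: (D @ u) approximates u'(x) spectrally."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative-sum trick
    return D, x
```

    Applied to smooth functions the derivative converges spectrally fast in N, which is why Chebyshev collocation suits critical-layer computations that demand high resolution.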

  4. Teaching Problem-Solving Skills to Nuclear Engineering Students

    ERIC Educational Resources Information Center

    Waller, E.; Kaye, M. H.

    2012-01-01

    Problem solving is an essential skill for nuclear engineering graduates entering the workforce. Training in qualitative and quantitative aspects of problem solving allows students to conceptualise and execute solutions to complex problems. Solutions to problems in high consequence fields of study such as nuclear engineering require rapid and…

  5. Analytical solutions for sequentially coupled one-dimensional reactive transport problems Part I: Mathematical derivations

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Clement, T. P.

    2008-02-01

    Multi-species reactive transport equations coupled through sorption and sequential first-order reactions are commonly used to model sites contaminated with radioactive wastes, chlorinated solvents and nitrogenous species. Although researchers have been attempting to solve various forms of these reactive transport equations for over 50 years, a general closed-form analytical solution to this problem is not available in the published literature. In Part I of this two-part article, we derive a closed-form analytical solution to this problem for spatially-varying initial conditions. The proposed solution procedure employs a combination of Laplace and linear transform methods to uncouple and solve the system of partial differential equations. Two distinct solutions are derived for Dirichlet and Cauchy boundary conditions each with Bateman-type source terms. We organize and present the final solutions in a common format that represents the solutions to both boundary conditions. In addition, we provide the mathematical concepts for deriving the solution within a generic framework that can be used for solving similar transport problems.
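
    For the sequential first-order reactions alone (ignoring advection and dispersion), the Bateman-type source terms mentioned above have a classic closed form. A zero-dimensional sketch, assuming distinct rate constants and initial mass in the first species only; the paper's full solution additionally handles transport and spatially varying initial conditions:

```python
import math

def bateman(n, rates, N0, t):
    """Concentration of species n (1-indexed) at time t in a sequential
    first-order chain 1 -> 2 -> ... with distinct rate constants `rates`
    and initial mass N0 in species 1 only:
        N_n(t) = N0 * (k_1 ... k_{n-1}) * sum_i exp(-k_i t) / prod_{j!=i}(k_j - k_i)."""
    k = rates[:n]
    prod_k = math.prod(k[:-1])          # k_1 ... k_{n-1} (empty product = 1)
    total = 0.0
    for i in range(n):
        denom = math.prod(k[j] - k[i] for j in range(n) if j != i)
        total += math.exp(-k[i] * t) / denom
    return N0 * prod_k * total
```

    For two species this reduces to the familiar N_2(t) = N0 * k_1/(k_2 - k_1) * (e^(-k_1 t) - e^(-k_2 t)); repeated rate constants need a limiting form not handled here.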

  6. Transformational and derivational strategies in analogical problem solving.

    PubMed

    Schelhorn, Sven-Eric; Griego, Jacqueline; Schmid, Ute

    2007-03-01

Analogical problem solving is mostly described as transfer of a source solution to a target problem based on the structural correspondences (mapping) between source and target. Derivational analogy (Carbonell, in Machine Learning: An Artificial Intelligence Approach, Morgan Kaufmann, Los Altos, 1986) proposes an alternative view: a target problem is solved by replaying a remembered problem-solving episode. Thus, the experience with the source problem is used to guide the search for the target solution by applying the same solution technique rather than by transferring the complete solution. We report an empirical study using the path-finding problems presented in Novick and Hmelo (J Exp Psychol Learn Mem Cogn 20:1296-1321, 1994) as material. We show that both transformational and derivational analogy are problem-solving strategies realized by human problem solvers. Which strategy is evoked in a given problem-solving context depends on the constraints guiding object-to-object mapping between source and target problem. Specifically, if constraints facilitating mapping are available, subjects are more likely to employ a transformational strategy; otherwise they are more likely to use a derivational strategy.

  7. A Case Study of Controlling Crossover in a Selection Hyper-heuristic Framework Using the Multidimensional Knapsack Problem.

    PubMed

    Drake, John H; Özcan, Ender; Burke, Edmund K

    2016-01-01

    Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
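
    A minimal single-point selection hyper-heuristic for the 0-1 knapsack can make the crossover-list idea concrete: a small memory of past solutions supplies the second parent for crossover. Everything below (the heuristic set, random selection, non-worsening acceptance, memory size) is an illustrative assumption, far simpler than the HyFlex-style frameworks studied in the paper:

```python
import random

def knapsack_hh(values, weights, capacity, iters=2000, seed=0):
    """Single-point selection hyper-heuristic sketch for the 0-1 knapsack.
    Low-level heuristics (bit-flip mutation, uniform crossover) are chosen
    at random; a list of past solutions supplies crossover partners, and a
    repair step drops items at random until the weight limit holds."""
    rng = random.Random(seed)
    n = len(values)

    def repair(sol):
        while sum(w for w, s in zip(weights, sol) if s) > capacity:
            ones = [i for i, s in enumerate(sol) if s]
            sol[rng.choice(ones)] = 0
        return sol

    def value(sol):
        return sum(v for v, s in zip(values, sol) if s)

    cur = repair([rng.randint(0, 1) for _ in range(n)])
    memory = [cur[:]]                      # list of potential crossover partners
    for _ in range(iters):
        if rng.random() < 0.5:             # heuristic 1: bit-flip mutation
            cand = cur[:]
            cand[rng.randrange(n)] ^= 1
        else:                              # heuristic 2: uniform crossover
            mate = rng.choice(memory)
            cand = [a if rng.random() < 0.5 else b for a, b in zip(cur, mate)]
        cand = repair(cand)
        if value(cand) >= value(cur):      # accept non-worsening moves
            cur = cand
            memory = (memory + [cur[:]])[-5:]
    return cur, value(cur)
```

    Managing the partner list at the domain level, as the paper's better-performing frameworks do, would mean filling `memory` with deliberately good solutions rather than recent ones.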

  8. A Genetic Algorithm Approach for the TV Self-Promotion Assignment Problem

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo A.; Fontes, Fernando A. C. C.; Fontes, Dalila B. M. M.

    2009-09-01

We report on the development of a Genetic Algorithm (GA), which has been integrated into a Decision Support System to plan the best assignment of the weekly self-promotion space for a TV station. The problem addressed consists of deciding which shows to advertise, and when, such that the number of viewers of an intended group or target is maximized. The proposed GA incorporates a greedy heuristic to find good initial solutions. These solutions, as well as the solutions later obtained by the GA, then go through a repair procedure. This procedure serves two objectives, which are addressed in turn. First, it checks the solution for feasibility and, if infeasible, fixes it by removing some shows. Second, it tries to improve the solution by adding some extra shows. Since the problem faced by the commercial TV station is too large and has too many features, it cannot be solved exactly. Therefore, in order to test the quality of the solutions provided by the proposed GA, we randomly generated some smaller problem instances. For these problems we obtained solutions on average within 1% of the optimal solution value.
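
    The two-phase repair procedure described above (restore feasibility by removing shows, then improve by adding extras) can be sketched abstractly. The greedy rating-per-second ordering and all parameter names are illustrative assumptions, not the paper's exact operator:

```python
def repair(selected, durations, ratings, time_budget):
    """Two-phase repair sketch: (1) restore feasibility by dropping the ads
    with the worst rating-per-second until the self-promotion time budget
    holds; (2) improve the solution by greedily inserting any ads that
    still fit. All inputs are illustrative stand-ins for the real model."""
    sel = set(selected)
    density = lambda i: ratings[i] / durations[i]
    # Phase 1: feasibility -- drop worst-density ads while over budget.
    while sum(durations[i] for i in sel) > time_budget:
        sel.remove(min(sel, key=density))
    # Phase 2: improvement -- add best-density ads that still fit.
    slack = time_budget - sum(durations[i] for i in sel)
    for i in sorted(set(range(len(durations))) - sel, key=density, reverse=True):
        if durations[i] <= slack:
            sel.add(i)
            slack -= durations[i]
    return sorted(sel)
```

    Running such a repair after crossover and mutation keeps the whole GA population feasible, so the fitness function never needs penalty terms.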

  9. Solution to the Problems of the Sustainable Development Management

    NASA Astrophysics Data System (ADS)

    Rusko, Miroslav; Procházková, Dana

    2011-01-01

The paper shows that the environment is one of the basic public assets of a human system and must therefore be specially protected. According to our present knowledge, sustainability is necessary for all human systems, and the sustainable development principles must be invoked for all human system assets. Sustainable development is understood as development that does not erode the ecological, social or political systems on which it depends: it explicitly accepts ecological limits on economic activity while fully supporting human needs. The paper summarises the conditions for sustainable development; the tools, methods and techniques for solving environmental problems; and the tasks of executive governance in the environmental segment.

  10. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
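
    A toy version of the sampling step can illustrate how an MCMC chain yields credible intervals rather than a single MAP point. The random-walk Metropolis sketch below targets a scalar "force" observed through a gain with Gaussian noise and a Gaussian prior; the model and every value are illustrative, not the paper's structural problem:

```python
import numpy as np

def metropolis(logpost, x0, n_samples, step, seed=0):
    """Random-walk Metropolis sketch: draws samples from a posterior given
    only its log-density, from which credible intervals and summary
    statistics (mean, median, mode) can be estimated."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy posterior: scalar force f, measurements y = g*f + e, Gaussian noise
# (sigma) and Gaussian prior on f (tau). All values are illustrative.
g, sigma, tau, f_true = 2.0, 0.5, 10.0, 1.5
rng = np.random.default_rng(1)
y = g * f_true + sigma * rng.standard_normal(50)
logpost = lambda f: -0.5 * np.sum((y - g * f) ** 2) / sigma**2 - 0.5 * f**2 / tau**2
samples = metropolis(logpost, 0.0, 20000, 0.2)
lo, hi = np.percentile(samples[5000:], [2.5, 97.5])   # 95% credible interval
```

    Discarding the first part of the chain as burn-in and reading off percentiles is the step that turns a point estimate into the posterior uncertainty assessment the paper is after.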

  11. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    NASA Astrophysics Data System (ADS)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveals significant spatial dependence of the resolution with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
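
    For a damped least-squares inversion, the linearized resolution matrix and a PSF column can be computed directly on a small toy problem. The Gaussian-kernel "survey" below is an illustrative stand-in, not ERT, and only shows what a PSF is; the paper's contribution is computing such columns iteratively for problems far too large to form the matrix:

```python
import numpy as np

def resolution_matrix(J, lam):
    """Linearised model resolution matrix R = (J^T J + lam*I)^{-1} J^T J of a
    damped least-squares inverse problem. Column k of R is the point-spread
    function (PSF): the image, in the estimated model, of a unit spike in
    true-model parameter k."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), JtJ)

# Toy 1-D "survey": each datum is a Gaussian-weighted average of the model,
# so resolution is blurred and parameter-dependent (all values illustrative).
m = 40
x = np.linspace(0.0, 1.0, m)
centers = np.linspace(0.0, 1.0, 25)
J = np.exp(-0.5 * ((centers[:, None] - x[None, :]) / 0.05) ** 2)
R = resolution_matrix(J, 1e-2)
psf = R[:, m // 2]                     # PSF of the central model parameter
```

    Perfect resolution would give R = I; the width and sidelobes of `psf` are exactly the blurring and off-diagonal features the abstract describes.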

  12. Can chaos be observed in quantum gravity?

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Höhn, Philipp A.; Koslowski, Tim A.; Nelson, Mike I.

    2017-06-01

    Full general relativity is almost certainly 'chaotic'. We argue that this entails a notion of non-integrability: a generic general relativistic model, at least when coupled to cosmologically interesting matter, likely possesses neither differentiable Dirac observables nor a reduced phase space. It follows that the standard notion of observable has to be extended to include non-differentiable or even discontinuous generalized observables. These cannot carry Poisson-algebraic structures and do not admit a standard quantization; one thus faces a quantum representation problem of gravitational observables. This has deep consequences for a quantum theory of gravity, which we investigate in a simple model for a system with Hamiltonian constraint that fails to be completely integrable. We show that basing the quantization on standard topology precludes a semiclassical limit and can even prohibit any solutions to the quantum constraints. Our proposed solution to this problem is to refine topology such that a complete set of Dirac observables becomes continuous. In the toy model, it turns out that a refinement to a polymer-type topology, as e.g. used in loop gravity, is sufficient. Basing quantization of the toy model on this finer topology, we find a complete set of quantum Dirac observables and a suitable semiclassical limit. This strategy is applicable to realistic candidate theories of quantum gravity and thereby suggests a solution to a long-standing problem which implies ramifications for the very concept of quantization. Our work reveals a qualitatively novel facet of chaos in physics and opens up a new avenue of research on chaos in gravity which hints at deep insights into the structure of quantum gravity.

  13. A game theoretic approach to a finite-time disturbance attenuation problem

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1991-01-01

    A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.

  14. Old wine in new bottles: decanting systemic family process research in the era of evidence-based practice.

    PubMed

    Rohrbaugh, Michael J

    2014-09-01

Social cybernetic (systemic) ideas from the early Family Process era, though emanating from qualitative clinical observation, have underappreciated heuristic potential for guiding quantitative empirical research on problem maintenance and change. The old conceptual wines we have attempted to repackage in new, science-friendly bottles include ironic processes (when "solutions" maintain problems), symptom-system fit (when problems stabilize relationships), and communal coping (when we-ness helps people change). Both self-report and observational quantitative methods have been useful in tracking these phenomena, and together the three constructs inform a team-based family consultation approach to working with difficult health and behavior problems. In addition, a large-scale, quantitatively focused effectiveness trial of family therapy for adolescent drug abuse highlights the importance of treatment fidelity and qualitative approaches to examining it. In this sense, echoing the history of family therapy research, our experience with juxtaposing quantitative and qualitative methods has gone full circle: from qualitative to quantitative observation and back again. © 2014 FPI, Inc.

  15. Effective dimensional reduction algorithm for eigenvalue problems for thin elastic structures: A paradigm in three dimensions

    PubMed Central

    Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.

    2000-01-01

We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
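
    As a baseline for the minimal-eigenvalue computation discussed above, the textbook inverse power iteration is sketched below. Robust thin-structure solvers replace the exact inverse application with EDRA-type preconditioned iterations, which are beyond this sketch; everything here is the generic method, not the paper's algorithm:

```python
import numpy as np

def min_eigenvalue(A, tol=1e-10, maxit=500):
    """Inverse power iteration sketch for the smallest eigenvalue of a
    symmetric positive definite matrix A: repeatedly apply A^{-1} to a
    vector and track the Rayleigh quotient."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = v @ A @ v
    for _ in range(maxit):
        w = np.linalg.solve(A, v)          # one application of the inverse
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v                # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new
        lam = lam_new
    return lam
```

    For thin structures the convergence of such iterations degrades as the thickness shrinks, which is precisely the difficulty the EDRA preconditioning is designed to remove.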

  16. One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1991-01-01

The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control and boundary control problems.

  17. A fast numerical method for the valuation of American lookback put options

    NASA Astrophysics Data System (ADS)

    Song, Haiming; Zhang, Qi; Zhang, Ran

    2015-10-01

A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it into a two-dimensional parabolic linear complementarity problem (LCP) on an unbounded domain. The numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. Furthermore, the variational inequality (VI) formulation corresponding to the one-dimensional bounded LCP is derived through a careful analysis. The resulting bounded VI is discretized by a finite element method. Meanwhile, the stability of the semi-discrete solution and the symmetric positive definiteness of the full-discrete matrix are established for the bounded VI. The discretized VI related to options is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.
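
    The projection and contraction method itself is beyond a short sketch, but the discretised LCP it targets can be illustrated with the classical projected SOR (PSOR) iteration, shown here as an alternative, purely illustrative solver for an obstacle-type LCP of the kind American option pricing produces (obstacle g = payoff):

```python
import numpy as np

def psor(A, b, g, omega=1.2, tol=1e-10, maxit=10000):
    """Projected SOR sketch for the linear complementarity problem
        x >= g,  A x >= b,  (x - g)^T (A x - b) = 0.
    Each Gauss-Seidel update is over-relaxed and then projected back onto
    the obstacle. A is assumed to have a positive diagonal."""
    n = len(b)
    x = np.maximum(g, 0.0).astype(float)
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            resid = b[i] - A[i] @ x + A[i, i] * x[i]   # b_i - sum_{j!=i} A_ij x_j
            x[i] = max(g[i], (1 - omega) * x[i] + omega * resid / A[i, i])
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```

    At convergence each node either sits on the obstacle (x_i = g_i, early exercise) or satisfies the discrete equation (row residual zero, continuation region), which is exactly the complementarity structure the paper's VI formulation encodes.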

  18. Modernizing engine displays

    NASA Technical Reports Server (NTRS)

    Schneider, E. T.; Enevoldson, E. K.

    1984-01-01

    The introduction of electronic fuel control to modern turbine engines has a number of advantages, which are related to an increase in engine performance and to a reduction or elimination of the problems associated with high angle of attack engine operation from the surface to 50,000 feet. If the appropriate engine display devices are available to the pilot, the fuel control system can provide a great amount of information. Some of the wealth of information available from modern fuel controls is discussed in this paper. The electronic engine control systems considered, in their most recent forms, are known as the Full Authority Digital Engine Control (FADEC) and the Digital Electronic Engine Control (DEEC). Attention is given to some details regarding the control systems, typical engine problems, the solution of problems with the aid of displays, engine displays in normal operation, an example display format, a multipage format, flight strategies, and hardware considerations.

  19. Probabilistic sharing solves the problem of costly punishment

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojie; Szolnoki, Attila; Perc, Matjaž

    2014-08-01

    Cooperators that refuse to participate in sanctioning defectors create the second-order free-rider problem. Such cooperators will not be punished because they contribute to the public good, but they also eschew the costs associated with punishing defectors. Altruistic punishers—those that cooperate and punish—are at a disadvantage, and it is puzzling how such behaviour has evolved. We show that sharing the responsibility to sanction defectors rather than relying on certain individuals to do so permanently can solve the problem of costly punishment. Inspired by the fact that humans have strong but also emotional tendencies for fair play, we consider probabilistic sanctioning as the simplest way of distributing the duty. In well-mixed populations the public goods game is transformed into a coordination game with full cooperation and defection as the two stable equilibria, while in structured populations pattern formation supports additional counterintuitive solutions that are reminiscent of Parrondo's paradox.

  20. Managing wilderness recreation use: common problems and potential solutions

    Treesearch

    David N. Cole; Margaret E. Petersen; Robert C. Lucas

    1987-01-01

    Describes pros and cons of potential solutions to common wilderness recreation problems. Covers the purpose of each potential solution, costs to visitors and management, effectiveness, other considerations, and sources of additional information.

  1. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application, not least due to the computational requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically, such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
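
    The error-aware surrogate idea can be sketched in a few lines. This is a toy illustration only: the paper's trained neural network is replaced by a hypothetical Taylor-style approximation, and the forward model, data, and noise levels are all invented. The surrogate's modeling error is estimated on a training set and folded into the likelihood variance before Metropolis sampling, mirroring the probabilistic error treatment described above.

```python
import math
import random

def g(m):                       # stand-in for the expensive forward model
    return math.sin(m) + 0.1 * m ** 3

def g_hat(m):                   # stand-in for the trained neural network
    return m - m ** 3 / 6 + 0.1 * m ** 3

# Quantify the surrogate's modeling error on a "training" set
train = [i / 10 for i in range(-10, 11)]
resid = [g(m) - g_hat(m) for m in train]
var_model = sum(r * r for r in resid) / len(resid)

random.seed(0)
d_obs, var_noise = g(0.5), 0.01     # synthetic datum from true model m = 0.5
var_tot = var_noise + var_model     # total variance: noise + surrogate error

def log_like(m):
    r = d_obs - g_hat(m)
    return -0.5 * r * r / var_tot

# Metropolis sampling using only the fast surrogate, with a flat prior on [-2, 2]
m, samples = 0.0, []
for _ in range(5000):
    m_new = m + random.gauss(0.0, 0.3)
    if abs(m_new) <= 2.0 and math.log(random.random()) < log_like(m_new) - log_like(m):
        m = m_new
    samples.append(m)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 2))
```

    With these invented numbers the posterior mass concentrates near the true value m = 0.5, even though every likelihood evaluation uses only the cheap surrogate.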

  2. Managing Small Spacecraft Projects: Less is Not Easier

    NASA Technical Reports Server (NTRS)

    Barley, Bryan; Newhouse, Marilyn

    2012-01-01

    Managing small, low cost missions (class C or D) is not necessarily easier than managing a full flagship mission. Yet, small missions are typically considered easier to manage and are used as a training ground for developing the next generation of project managers. While limited resources can be a problem for small missions, in reality most of the issues inherent in managing small projects are not the direct result of limited resources. Instead, problems encountered by managers of small spacecraft missions often derive from 1) the perception that managing small projects is easier, and that if something is easier it needs less rigor and formality in execution, 2) the perception that limited resources necessitate or validate omitting standard management practices, 3) less stringent or unclear guidelines or policies for small projects, and 4) stakeholder expectations that are not consistent with the size and nature of the project. For example, the size of a project is sometimes used to justify not building a full, detailed integrated master schedule. However, while a small schedule slip may not be a problem for a large mission, it can indicate a serious problem for a small mission with a short development phase, highlighting the importance of the schedule for early identification of potential issues. Likewise, stakeholders may accept a higher risk posture early in the definition of a low-cost mission, but as launch approaches this acceptance may change. This presentation discusses these common misconceptions about managing small, low cost missions, the problems that can result, and possible solutions.

  3. A History of Aerospace Problems, Their Solutions, Their Lessons

    NASA Technical Reports Server (NTRS)

    Ryan, R. S.

    1996-01-01

    The positive aspect of problem occurrences is the opportunity for learning and a challenge for innovation. The learning aspect is not restricted to the solution period of the problem occurrence, but can become the beacon for problem prevention on future programs. Problems/failures serve as a point of departure for scaling to new designs. To ensure that problems/failures and their solutions guide the future programs, a concerted effort has been expended to study these problems, their solutions, their derived lessons learned, and projections for future programs. This includes identification of technology thrusts, process changes, codes development, etc. However, they must not become an excuse for adding layers upon layers of standards, criteria, and requirements, but must serve as guidelines that assist instead of stifling engineers. This report is an extension of prior efforts to accomplish this task. Although these efforts only scratch the surface, it is a beginning that others must complete.

  4. The effects of monitoring environment on problem-solving performance.

    PubMed

    Laird, Brian K; Bailey, Charles D; Hester, Kim

    2018-01-01

    While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes (no monitoring, human monitoring, and electronic monitoring), 114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and the theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.

  5. Automatic Annotation Method on Learners' Opinions in Case Method Discussion

    ERIC Educational Resources Information Center

    Samejima, Masaki; Hisakane, Daichi; Komoda, Norihisa

    2015-01-01

    Purpose: The purpose of this paper is to automatically annotate learners' opinions with an attribute of a problem, a solution, or no annotation, in order to support the learners' discussion without a facilitator. The case method aims at discussing problems and solutions in a target case. However, the learners may miss discussing some of the problems and solutions.…

  6. Closed solutions to a differential-difference equation and an associated plate solidification problem.

    PubMed

    Layeni, Olawanle P; Akinola, Adegbola P; Johnson, Jesse V

    2016-01-01

    Two distinct and novel formalisms for deriving exact closed solutions of a class of variable-coefficient differential-difference equations arising from a plate solidification problem are introduced. Thereupon, exact closed traveling wave and similarity solutions to the plate solidification problem are obtained for some special cases of time-varying plate surface temperature.

  7. Reflection on Solutions in the Form of Refutation Texts versus Problem Solving: The Case of 8th Graders Studying Simple Electric Circuits

    ERIC Educational Resources Information Center

    Safadi, Rafi; Safadi, Ekhlass; Meidav, Meir

    2017-01-01

    This study compared students' learning in troubleshooting and problem solving activities. The troubleshooting activities provided students with solutions to conceptual problems in the form of refutation texts; namely, solutions that portray common misconceptions, refute them, and then present the accepted scientific ideas. They required students…

  18. An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents

    ERIC Educational Resources Information Center

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2013-01-01

    Content authenticity and correctness are among the important challenges in eLearning, as there can be many solutions to one specific problem in cyberspace. Therefore, the authors feel it is necessary to map problems to solutions using graph partition and weighted bipartite matching. This article proposes an efficient algorithm to partition…

  9. Closing the gap: connecting sudden representational change to the subjective Aha! experience in insightful problem solving.

    PubMed

    Danek, Amory H; Williams, Joshua; Wiley, Jennifer

    2018-01-18

    Two hallmarks of insightful problem solving are thought to be suddenness in the emergence of the solution due to changes in problem representation, and the subjective Aha! experience. Although a number of studies have explored the Aha! experience, few studies have attempted to measure representational change. Following the lead of Durso et al. (Psychol Sci 5(2):94-97, 1994) and Cushen and Wiley (Conscious Cognit 21(3):1166-1175, 2012), in this study participants made importance-to-solution ratings throughout their solution attempts as a way to assess representational change. Participants viewed a set of magic trick videos with the task of finding out how each trick worked, and rated six action verbs for each trick (including one that implied the correct solution) multiple times during solution. They were also asked to indicate the extent to which they experienced an Aha! moment. Patterns of ratings that showed a sudden change towards a correct solution led to stronger Aha! experiences than patterns that showed a more incremental change towards a correct solution, or a change towards incorrect solutions. The results show a connection between sudden changes in problem representations (leading to correct solutions) and the subjective appraisal of solutions as an Aha! experience. This offers the first empirical support for a close relationship between two theoretical constructs that have traditionally been assumed to be related to insightful problem solving.

  10. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided at each node, so that the user can interactively select the method and data sets to test and subsequently examine the results. Users are also allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution found.

  11. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

    The majority of practical cases of acoustics problems require solving boundary problems in non-canonical domains. Therefore the construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is both valuable from the academic viewpoint and very instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solving ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains that can be constructed as the union of canonically-shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this case implies some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined in the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  12. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
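
    The motivation for the implicit treatment of stiffness can be illustrated on a scalar linear test equation (not the paper's musculoskeletal model): at the same step size, the backward (implicit) Euler update remains stable while the forward (explicit) one diverges.

```python
# Stiff scalar ODE x' = lam * x with lam = -50 and step h = 0.1.
# Forward Euler multiplies the state by (1 + h*lam) = -4 each step (diverges);
# backward Euler multiplies by 1/(1 - h*lam) = 1/6 each step (decays),
# mirroring why implicit formulations permit much larger time steps.
lam, h = -50.0, 0.1
x_exp = x_imp = 1.0
for _ in range(20):
    x_exp = x_exp + h * lam * x_exp     # forward (explicit) Euler update
    x_imp = x_imp / (1.0 - h * lam)     # backward (implicit) Euler update

print(abs(x_exp), x_imp)   # explicit iterate has blown up; implicit has decayed
```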

  13. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  14. A genetic algorithm solution to the unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.

    1996-02-01

    This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near optimal solutions, but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
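
    A minimal GA sketch in the spirit of the abstract, applied to an invented single-period toy instance: choose which units to commit so total capacity covers demand at minimum cost. The penalty term loosely stands in for the paper's Varying Quality Function, and all unit data, operators, and parameters are hypothetical.

```python
import random

random.seed(1)
cap  = [50, 60, 80, 40, 70]    # hypothetical unit capacities (MW)
cost = [5, 7, 10, 4, 8]        # hypothetical running costs
demand = 150

def fitness(bits):             # cost plus a heavy penalty for unmet demand
    c = sum(ci for b, ci in zip(bits, cost) if b)
    shortfall = max(0, demand - sum(k for b, k in zip(bits, cap) if b))
    return c + 1000 * shortfall

def crossover(a, b):           # standard one-point crossover
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(bits, rate=0.1):    # bit-flip mutation
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in cap] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]           # truncation selection; elites survive unchanged
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

    For this tiny instance the GA settles on a feasible commitment; the penalty weight is what steers the search away from schedules that leave demand unmet.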

  15. On Computing Breakpoint Distances for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Moret, Bernard M E

    2017-06-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of their higher-level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided a solution last year for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignments, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
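
    For genomes without duplicate genes the breakpoint distance is simple to compute directly; the sketch below shows that easy case with unsigned adjacencies (a simplified illustration, not the paper's integer-linear-program machinery for duplicated genes).

```python
# Breakpoint distance for duplicate-free genomes: count the adjacencies of A
# that are not conserved in B, ignoring gene orientation.
def breakpoints(A, B):
    adj_B = {frozenset(p) for p in zip(B, B[1:])}   # order-insensitive adjacencies
    return sum(frozenset(p) not in adj_B for p in zip(A, A[1:]))

A = [1, 2, 3, 4, 5]
B = [1, 2, 5, 4, 3]          # B reverses the segment 3..5
print(breakpoints(A, B))     # only the adjacency (2,3) is not conserved -> 1
```

    The NP-hardness discussed above arises precisely when duplicate genes force a choice of matching before such adjacencies can even be defined.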

  16. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.

  17. The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups.

    PubMed

    Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi

    2018-01-01

    Demonstrability, the extent to which group members can recognize a correct solution to a problem, has a significant effect on group performance. However, the interplay between group size, demonstrability, and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors, the difficulty of solving a problem and the difficulty of verifying the correctness of a solution, on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.

  18. Aiding the search: Examining individual differences in multiply-constrained problem solving.

    PubMed

    Ellis, Derek M; Brewer, Gene A

    2018-07-01

    Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions such that high WMC participants benefit more from preexposure to correct solutions than low WMC (although low WMC participants have preexposure benefits as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.

  19. People efficiently explore the solution space of the computationally intractable traveling salesman problem to find near-optimal tours.

    PubMed

    Acuña, Daniel E; Parada, Víctor

    2010-07-29

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current better solutions in such a way that edges belonging to the optimal solution ("good" edges) were significantly more likely to stay than other edges ("bad" edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants "ran out of ideas." In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics.

  1. Relative motion of orbiting satellites

    NASA Technical Reports Server (NTRS)

    Eades, J. B., Jr.

    1972-01-01

    The relative motion problem is analyzed both as a linearized case and as a numerically determined solution to provide a time history of the geometries representing the motion state. The displacement history and the hodographs for families of solutions are provided, analytically and graphically, to serve as an aid to understanding this problem area. Linearized solutions to relative motion problems of orbiting particles are presented for the impulsive and fixed thrust cases. Second order solutions are described to enhance the accuracy of prediction. A method was developed to obtain accurate numerical solutions to the intercept and rendezvous problem, and special situations are examined. A particular problem related to relative motions, where the motion traces develop a cusp, is examined in detail. This phenomenon is found to depend on a particular relationship between orbital eccentricity and the inclination between orbital planes. These conditions are determined, and example situations are presented and discussed.

  2. Solving optimization problems by the public goods game

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2017-09-01

    We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are provided with a random solution to the given problem. Agents interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method computes exact solutions, and suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
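
    A loose sketch of the imitation dynamic described above, with invented details: agents hold tours of a tiny random TSP instance, and in each round the poorer of two randomly paired agents imitates the richer one's tour with a small 2-opt "copy error". This is an illustration of the spirit of the method, not the paper's actual game dynamics or payoff structure.

```python
import itertools
import math
import random

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(7)]   # random cities

def length(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

agents = [random.sample(range(7), 7) for _ in range(20)]       # random tours
for _ in range(1000):
    a, b = random.sample(range(20), 2)
    poor, rich = (a, b) if length(agents[a]) > length(agents[b]) else (b, a)
    t = agents[rich][:]
    i, j = sorted(random.sample(range(7), 2))
    t[i:j] = reversed(t[i:j])           # imitation with a 2-opt copy error
    if length(t) < length(agents[poor]):
        agents[poor] = t                # the perturbed copy is even better
    else:
        agents[poor] = agents[rich][:]  # otherwise imitate the rich agent exactly

best = min(agents, key=length)
optimum = min((list(p) for p in itertools.permutations(range(7))), key=length)
print(round(length(best), 3), round(length(optimum), 3))
```

    Good tours spread through the population the way contributions spread in the game, while copy errors keep injecting local improvements; for 7 cities the exact optimum is cheap to enumerate for comparison.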

  3. Flowing partially penetrating well: solution to a mixed-type boundary value problem

    NASA Astrophysics Data System (ADS)

    Cassiani, G.; Kabala, Z. J.; Medina, M. A.

A new semi-analytic solution to the mixed-type boundary value problem for a flowing partially penetrating well with infinitesimal skin situated in an anisotropic aquifer is developed. The solution is suited to aquifers having a semi-infinite vertical extent or to packer tests with aquifer horizontal boundaries far enough from the tested area. The problem reduces to a system of dual integral equations (DE) and further to a deconvolution problem. Unlike the analogous steady-state solution of Dagan [Water Resour. Res. 1978; 14:929-34], our DE solution does not suffer from numerical oscillations. The new solution is validated by matching the corresponding finite-difference solution and is computationally much more efficient. An automated (Newton-Raphson) parameter identification algorithm is proposed for field test inversion, utilizing the DE solution for the forward model. The procedure is computationally efficient and converges to correct parameter values. A solution for the partially penetrating flowing well with no skin and a drawdown-drawdown discontinuous boundary condition, analogous to that of Novakowski [Can. Geotech. J. 1993; 30:600-6], is compared to the DE solution. The D-D solution leads to a physically inconsistent infinite total flow rate to the well when no skin effect is considered. The DE solution, on the other hand, produces accurate results.

  4. POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Ştefănescu, R.; Sandu, A.; Navon, I. M.

    2015-08-01

This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that the reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies for constructing the reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time, POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for the Galerkin projection. In the case of the Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.

  5. An example of branching in a variational problem. [shape of liquid suspended from wire in zero gravity

    NASA Technical Reports Server (NTRS)

    Darbro, W.

    1978-01-01

    In an experiment in space it was found that when a cubical frame was slowly withdrawn from a soap solution, the wire frame retained practically a full cube of liquid. Removed from the frame (by shaking), the faces of the cube became progressively more concave, until adjacent faces became tangential. In the present paper a mathematical model describing the shape a liquid takes due to its surface tension while suspended on a wire frame in zero-g is solved by use of Lagrange multipliers. It is shown how the configuration of soap films so bounded is dependent upon the volume of liquid trapped in the films. A special case of the solution is a soap film naturally formed on a cubical wire frame.

  6. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; the problem is thus transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until it converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without a budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with a budget constraint, however, the result depends on the selection of initial values, which leads to different local optimal solutions. Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803

  7. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

We address the problem of on-line scheduling of the aircraft descent and approach trajectory. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low dimensional flight profile data and (ii) efficient local search for the resulting reduced dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software toolbox. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.

  8. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single-reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem; the global solution equals 1.213 for this same problem. On average, the GSA converged to 99.97% of the global solution of the four-reservoir problem, while the GA converged to 97%. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
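A minimal, simplified sketch of the gravity search idea (agents as masses attracting each other, with a decaying gravitational constant) is shown below. The parameter values and the omission of refinements such as the Kbest neighbourhood are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=30, iters=300, g0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_agents, dim))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        i = int(fit.argmin())
        if fit[i] < best_f:
            best_f, best_x = fit[i], x[i].copy()
        # normalized masses: best agent is heaviest, worst is massless
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-20.0 * t / iters)          # decaying gravity constant
        a = np.zeros_like(x)
        for j in range(n_agents):
            diff = x - x[j]
            r = np.linalg.norm(diff, axis=1) + 1e-12
            # stochastic gravitational pull toward every other (heavier) agent
            a[j] = (G * rng.random(n_agents) * M / r) @ diff
        v = rng.random(x.shape) * v + a
        x = np.clip(x + v, lo, hi)
    return best_x, best_f
```

On a simple 2-D sphere function the sketch converges toward the origin as G decays and the swarm collapses onto the heavier (fitter) agents.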

  9. What students learn when studying physics practice exam problems

    NASA Astrophysics Data System (ADS)

    Fakcharoenphol, Witat; Potter, Eric; Stelzer, Timothy

    2011-06-01

We developed a web-based tool to provide students with access to old exam problems and solutions. By controlling the order in which students saw the problems, as well as their access to solutions, we obtained data about student learning from studying old exam problems. Our data suggest that in general students learn from doing old exam problems, and that having access to the problem solutions increases their learning. However, the data also suggest the depth of learning may be relatively shallow. In addition, the data show that doing old exam problems provides important formative assessment of students' overall preparedness for the exam and their particular areas of strength and weakness.

  10. Medical benefits from the NASA biomedical applications program

    NASA Technical Reports Server (NTRS)

    Sigmon, J. L.

    1974-01-01

To achieve its goals the NASA Biomedical Applications Program performs four basic tasks: (1) identification of major medical problems which lend themselves to solution by relevant aerospace technology; (2) identification of relevant aerospace technology which can be applied to those problems; (3) application of that technology to demonstrate its feasibility as a real solution to the identified problems; and (4) motivation of the industrial community to manufacture and market the identified solution, so as to maximize the utilization of aerospace solutions by the biomedical community.

  11. Technological Solutions for Older People with Alzheimer's Disease: Review.

    PubMed

    Maresova, Petra; Tomsone, Signe; Lameski, Petre; Madureira, Joana; Mendes, Ana; Zdravevski, Eftim; Chorbev, Ivan; Trajkovik, Vladimir; Ellen, Moriah; Rodile, Kasper

    2018-04-27

In the nineties, numerous studies began to highlight the problem of the increasing number of people with Alzheimer's disease in developed countries, especially in the context of demographic trends. At the same time, the 21st century is characterized by the development of advanced technologies that penetrate all areas of human life. Digital devices, sensors, and intelligent applications are tools that can help seniors and allow better communication with, and monitoring by, their caregivers. The aim of the paper is to provide an up-to-date summary of the use of technological solutions for improving health and safety for people with Alzheimer's disease. Firstly, the problems and needs of senior citizens with Alzheimer's disease (AD) and their caregivers are specified. Secondly, a scoping review is performed of the technological solutions suggested to assist this specific group of patients. Works for this scoping review were obtained from the following libraries: Web of Science, PubMed, Springer, ACM and IEEE Xplore. Four independent reviewers screened the identified records and selected relevant articles published in the period from 2007 to 2018. A total of 6,705 publications were identified; in all, 128 full papers were screened. Results obtained from the relevant studies were furthermore divided into the following categories according to the type and use of technologies: devices, processing, and activity recognition. The leading technological solutions in the device category are wearables and ambient non-invasive sensors. The introduction and utilization of these technologies, however, brings about challenges in acceptability, durability, ease of use, communication, and power requirements. Furthermore, it needs to be pointed out that these technological solutions should be based on open standards.

  12. From ``wiggly structures'' to ``unshaky towers'': problem framing, solution finding, and negotiation of courses of actions during a civil engineering unit for elementary students

    NASA Astrophysics Data System (ADS)

    Roth, Wolff-Michael

    1995-12-01

The present study was designed to investigate problem- and solution-related activity of elementary students in ill-defined and open-ended settings. One Grade 4/5 class of 28 students engaged in the activities of the “Engineering for Children: Structures” curriculum, designed as a vehicle for introducing science concepts, providing ill-defined problem solving contexts, and fostering positive attitudes towards science and technology. Data included video recordings, ethnographic field notes, student-produced artefacts (projects and engineering logbooks), and interviews with teachers and observers. These data supported the notion of problems, solutions, and courses of action as entities with flexible ontologies. In the course of their negotiations, students demonstrated an uncanny competence in framing and reframing problems and solutions and in deciding on courses of action of different complexities, in spite of the ambiguous nature of (arte)facts, plans, and language. A case study approach was chosen as the literary device to report these general findings. The discussion focuses on the inevitably ambiguous nature of (arte)facts, plans, and language and the associated notion of “interpretive flexibility.” Suggestions are provided for teachers on how to deal with interpretive flexibility without seeking recourse to the didactic approaches of direct teaching. But what happens when problems and solutions are negotiable, when there are no longer isolated problems which one tries to solve but problems which maintain complex linkages with ensembles of other problems and diverse constraints, or when problems and solutions are simultaneously invented? (Lestel, 1989, p. 692, my translation)

  13. Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning

    ERIC Educational Resources Information Center

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2009-01-01

    In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…

  14. A Scaffolding Framework to Support the Construction of Evidence-Based Arguments among Middle School Students

    ERIC Educational Resources Information Center

    Belland, Brian R.; Glazewski, Krista D.; Richardson, Jennifer C.

    2008-01-01

    Problem-based learning (PBL) is an instructional approach in which students in small groups engage in an authentic, ill-structured problem, and must (1) define, generate and pursue learning issues to understand the problem, (2) develop a possible solution, (3) provide evidence to support their solution, and (4) present their solution and the…

  15. Application of the perturbation iteration method to boundary layer type problems.

    PubMed

    Pakdemirli, Mehmet

    2016-01-01

    The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
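As a reminder of what inner/outer matching produces, consider the textbook singularly perturbed problem (an illustrative example, not one of the paper's): $\epsilon y'' + y' + y = 0$ with $y(0)=0$, $y(1)=1$. The outer solution comes from setting $\epsilon = 0$ and matching $y(1)=1$; the inner solution uses the stretched variable $X = x/\epsilon$; the composite expansion adds the two and subtracts their common limit $e$:

```latex
y_{\mathrm{out}}(x) = e^{1-x},
\qquad
y_{\mathrm{in}}(X) = e\left(1 - e^{-X}\right),
\qquad
y_{\mathrm{comp}}(x) \approx e^{1-x} - e^{\,1 - x/\epsilon}.
```

The composite form is uniformly valid: it reduces to the outer solution away from $x=0$ and to the inner solution inside the boundary layer of width $O(\epsilon)$.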

  16. Classical Electrodynamics: Problems with solutions; Problems with solutions

    NASA Astrophysics Data System (ADS)

    Likharev, Konstantin K.

    2018-06-01

Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. The two-volume Classical Electrodynamics is intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics of charged point particles, but also the properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.

  17. Multiobjective evolutionary optimization of water distribution systems: Exploiting diversity with infeasible solutions.

    PubMed

    Tanyimboh, Tiku T; Seyoum, Alemtsehay G

    2016-12-01

This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust; it found optimal and near-optimal solutions reliably and efficiently. The real-world optimization problem involved multiple variable-head supply nodes, 29 fire-fighting flows, extended period simulation and multiple demand categories including water loss. The least-cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2%, based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall.

  18. Triangular dislocation: an analytical, artefact-free solution

    NASA Astrophysics Data System (ADS)

    Nikkhoo, Mehdi; Walter, Thomas R.

    2015-05-01

Displacements and stress-field changes associated with earthquakes, volcanoes, landslides and human activity are often simulated using numerical models in an attempt to understand the underlying processes and their governing physics. The application of elastic dislocation theory to these problems, however, may be biased because of numerical instabilities in the calculations. Here, we present a new method that is free of artefact singularities and numerical instabilities in analytical solutions for triangular dislocations (TDs) in both full-space and half-space. We apply the method to both the displacement and the stress fields. The entire 3-D Euclidean space R^3 is divided into two complementary subspaces, in the sense that in each one, a particular analytical formulation fulfils the requirements for the ideal, artefact-free solution for a TD. The primary advantage of the presented method is that the development of our solutions involves neither numerical approximations nor series expansion methods. As a result, the final outputs are independent of the scale of the input parameters, including the size and position of the dislocation as well as its corresponding slip vector components. Our solutions are therefore well suited for application at various scales in geoscience, physics and engineering. We validate the solutions through comparison to other well-known analytical methods and provide the MATLAB codes.

  19. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
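Since the Parareal Algorithm is one of the two stage (coarse/fine) computations analyzed, a compact generic sketch may help fix ideas; the propagators and the exactness check below are illustrative assumptions, not the paper's formulation:

```python
def parareal(f_fine, f_coarse, y0, n_windows, iters):
    """Parareal two stage iteration: a cheap serial coarse propagator G
    corrects a parallelizable fine propagator F across time windows:
        y_new[n+1] = G(y_new[n]) + F(y_old[n]) - G(y_old[n])
    """
    y = [y0]
    for n in range(n_windows):                 # serial coarse sweep: initial guess
        y.append(f_coarse(y[n]))
    for _ in range(iters):
        fine = [f_fine(y[n]) for n in range(n_windows)]   # embarrassingly parallel
        y_new = [y0]
        for n in range(n_windows):
            y_new.append(f_coarse(y_new[n]) + fine[n] - f_coarse(y[n]))
        y = y_new
    return y
```

A standard property of the iteration is that after k corrections the first k+1 window values coincide with a purely serial fine solve, so with iters equal to the number of windows the result matches the fine propagation exactly.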

  20. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

This paper describes the development of a general-purpose solution algorithm for large scale problems using “Local Clustering Organization (LCO)” as a new solution method for the Job-shop scheduling problem (JSP). Building on earlier LCO studies, which demonstrated effective performance on large scale scheduling, we examine whether solving JSP with LCO stably yields better solutions. To improve solution performance for JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently through clustering with a parallel search that extends over plural machines. Results obtained by applying the extended LCO to problems of various scales show that it reduces the makespan and improves the stability of performance.

  1. Problems of Indian Children.

    ERIC Educational Resources Information Center

    Linton, Marigold

    Previous approaches to the learning problems of American Indian children are viewed as inadequate. An alternative is suggested which emphasizes the problem solution strategies which these children bring to the school situation. Solutions were analyzed in terms of: (1) their probability; (2) their efficiency at permitting a present problem to be…

  2. Problems Relating Mathematics and Science in the High School.

    ERIC Educational Resources Information Center

    Morrow, Richard; Beard, Earl

    This document contains various science problems which require a mathematical solution. The problems are arranged under two general areas. The first (algebra I) contains biology, chemistry, and physics problems which require solutions related to linear equations, exponentials, and nonlinear equations. The second (algebra II) contains physics…

  3. Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Sahraei, S.

    2016-12-01

    Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
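A sketch of the rounding-based alternative described above (assuming minimization, and a dict keyed by grid cell; the function names and the dominance bookkeeping are illustrative assumptions, not the authors' code) could look like:

```python
def rounded_key(objs, eps):
    # round each objective to its desired precision level (grid cell index)
    return tuple(round(o / e) for o, e in zip(objs, eps))

def dominates(a, b):
    # Pareto dominance for minimization
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_add(archive, objs, eps):
    """archive: dict mapping rounded objective tuples -> objective vectors.
    A new solution enters only if its grid cell is free and it is not
    dominated (on rounded values) by any archived member; members it
    dominates are evicted."""
    key = rounded_key(objs, eps)
    rounded = tuple(k * e for k, e in zip(key, eps))
    for other_key in list(archive):
        other = tuple(k * e for k, e in zip(other_key, eps))
        if other == rounded or dominates(other, rounded):
            return False
        if dominates(rounded, other):
            del archive[other_key]
    archive[key] = objs
    return True
```

Because the comparison happens on pre-rounded values, no separate epsilon-dominance pass over the archive is needed, which is the efficiency argument made above.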

  4. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    ERIC Educational Resources Information Center

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLABTM. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

  5. Problem Solving: How Can We Help Students Overcome Cognitive Difficulties

    ERIC Educational Resources Information Center

    Cardellini, Liberato

    2014-01-01

    The traditional approach to teach problem solving usually consists in showing students the solutions of some example-problems and then in asking students to practice individually on solving a certain number of related problems. This approach does not ensure that students learn to solve problems and above all to think about the solution process in…

  6. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  7. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.

  8. Analysis of a class of boundary value problems depending on left and right Caputo fractional derivatives

    NASA Astrophysics Data System (ADS)

    Antunes, Pedro R. S.; Ferreira, Rui A. C.

    2017-07-01

In this work we study boundary value problems associated to a nonlinear fractional ordinary differential equation involving left and right Caputo derivatives. We discuss the regularity of the solutions of such problems and, in particular, give precise necessary conditions so that the solutions are C^1([0, 1]). Taking into account our analytical results, we address the numerical solution of those problems by the augmented-RBF method. Several examples illustrate the good performance of the numerical method.

  9. Active Solution Space and Search on Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo

In this paper we propose a new search method using a Genetic Algorithm for the Job-shop scheduling problem (JSP). With the coding method that represents job numbers to decide the priority for arranging jobs on the Gantt chart (called the ordinal representation with priority), an active schedule is created by using left shifts. We first define an active solution: a solution from which an active schedule can be created without using left shifts; the set of such solutions defines the active solution space. We then propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create active solutions while solutions are evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.

  10. Multiple-solution problems in a statistics classroom: an example

    NASA Astrophysics Data System (ADS)

    Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing

    2017-11-01

    The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass distribution for the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (sample space in probability theory, the probability generating function in statistics, integer partition in basic combinatorics and individual risk model in actuarial science) and thus promotes upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
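The generating-function route mentioned above amounts to multiplying polynomials, i.e. convolving the individual pmfs. The paper implements its solutions in R; the sketch below uses Python instead, and the Sicherman dice in the usage check are a standard example of non-traditional dice, not necessarily the set used in the paper:

```python
import numpy as np

def sum_pmf(dice):
    """Exact pmf of the sum of independent dice, by convolving the
    individual pmfs (equivalent to multiplying probability generating
    functions). Each die is a list of non-negative integer face values."""
    pmf = np.array([1.0])            # pgf of the empty sum: the constant 1
    for faces in dice:
        die = np.zeros(max(faces) + 1)
        for f in faces:
            die[f] += 1.0 / len(faces)
        pmf = np.convolve(pmf, die)  # polynomial multiplication of pgfs
    return pmf                       # pmf[k] = P(sum = k)
```

For two standard dice this reproduces the familiar triangular distribution, and the Sicherman dice {1,2,2,3,3,4} and {1,3,4,5,6,8} give exactly the same sum distribution, a classic multiple-solution observation.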

  11. New analytical solutions to the two-phase water faucet problem

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-06-17

Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had been previously obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for code assessment, benchmarking and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom's analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the effect of the gas phase density on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the second-order spatial discretization scheme. In addition, extended Ransom's transient solutions for the gas phase velocity and pressure are derived, under the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom's solutions are also presented.
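For reference, the classical transient Ransom solution discussed above (massless gas, no friction) is commonly quoted in the form sketched below; the default inlet velocity, inlet liquid fraction and gravity value are illustrative assumptions:

```python
import math

def faucet_solution(x, t, u0=10.0, alpha_l0=0.8, g=9.81):
    """Classical transient solution of Ransom's water faucet problem:
    liquid velocity u and gas void fraction alpha_g at position x, time t.
    Behind the accelerating front the liquid is in free fall and thins by
    mass conservation; ahead of it the initial column is still intact."""
    x_front = u0 * t + 0.5 * g * t * t      # leading edge of the accelerated region
    if x <= x_front:
        u = math.sqrt(u0 * u0 + 2.0 * g * x)
        alpha_g = 1.0 - alpha_l0 * u0 / u   # continuity: alpha_l * u = alpha_l0 * u0
    else:
        u = u0 + g * t
        alpha_g = 1.0 - alpha_l0
    return u, alpha_g
```

Note that the two branches match continuously at the front, since sqrt(u0^2 + 2 g x_front) = u0 + g t.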

  12. Solutions of the benchmark problems by the dispersion-relation-preserving scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Shen, H.; Kurbatskii, K. A.; Auriault, L.

    1995-01-01

    The 7-point stencil Dispersion-Relation-Preserving scheme of Tam and Webb is used to solve all six categories of the CAA benchmark problems. The purpose is to show that the scheme is capable of solving linear, as well as nonlinear, aeroacoustics problems accurately. Nonlinearities inevitably lead to the generation of spurious short-wavelength numerical waves. Often, these spurious waves would overwhelm the entire numerical solution. In this work, the spurious waves are removed by the addition of artificial selective damping terms to the discretized equations. Category 3 problems are for testing radiation and outflow boundary conditions. In solving these problems, the radiation and outflow boundary conditions of Tam and Webb are used. These conditions are derived from the asymptotic solutions of the linearized Euler equations. Category 4 problems involve solid walls. Here, the wall boundary conditions for high-order schemes of Tam and Dong are employed. These conditions require the use of one ghost value per boundary point per physical boundary condition. In the second problem of this category, the governing equations, when written in cylindrical coordinates, are singular along the axis of the radial coordinate. The proper boundary conditions at the axis are derived by applying the limiting process r → 0 to the governing equations. The Category 5 problem deals with the numerical noise issue. In the present approach, the time-independent mean flow solution is computed first. Once the residual drops to the machine noise level, the incident sound wave is turned on gradually. The solution is marched in time until a time-periodic state is reached. No exact solution is known for the Category 6 problem. Because of this, the problem is formulated in two totally different ways, first as a scattering problem and then as a direct simulation problem. There is good agreement between the two numerical solutions. This offers confidence in the computed results.
Both formulations are solved as initial value problems. As such, no Kutta condition is required at the trailing edge of the airfoil.

  13. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important application in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
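
    The additivity described above can be demonstrated on any linear discretized system. A minimal sketch with a hypothetical 1-D conductance matrix and two well stresses:

```python
import numpy as np

# 1-D steady-state grid for a confined aquifer: A h = q, where A is the
# (linear) conductance matrix and q holds the well stresses.
n = 5
A = np.diag([2.0] * n) + np.diag([-1.0] * (n - 1), 1) + np.diag([-1.0] * (n - 1), -1)

q1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # stress from well 1
q2 = np.array([0.0, 0.0, 0.0, 0.0, 2.0])  # stress from well 2

h1 = np.linalg.solve(A, q1)               # head response to well 1 alone
h2 = np.linalg.solve(A, q2)               # head response to well 2 alone
h_both = np.linalg.solve(A, q1 + q2)      # both wells pumping together

print(np.allclose(h1 + h2, h_both))  # True: responses superpose
```

    Because `A` is linear, the head response to both stresses applied together equals the sum of the individual responses.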

  14. Ranked solutions to a class of combinatorial optimizations—with applications in mass spectrometry based peptide sequencing and a variant of directed paths in random media

    NASA Astrophysics Data System (ADS)

    Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo

    2005-08-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
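
    The transfer-matrix/dynamic-programming step that retains many high-ranking solutions can be conveyed by a toy lattice example (the grid, weights and ranking depth are made up for illustration; this is not the paper's algorithm):

```python
import heapq

def k_best_paths(weights, k=3):
    # Directed-path DP on a lattice: at each site keep the k best cumulative
    # weights among paths arriving from the left or from above.
    n, m = len(weights), len(weights[0])
    best = [[None] * m for _ in range(n)]
    best[0][0] = [weights[0][0]]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            cand = []
            if i > 0:
                cand += [s + weights[i][j] for s in best[i - 1][j]]
            if j > 0:
                cand += [s + weights[i][j] for s in best[i][j - 1]]
            best[i][j] = heapq.nlargest(k, cand)
    return best[-1][-1]

print(k_best_paths([[1, 2], [3, 4]]))  # [8, 7]: the two directed paths, ranked
```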

  15. The use of MACSYMA for solving elliptic boundary value problems

    NASA Technical Reports Server (NTRS)

    Thejll, Peter; Gilbert, Robert P.

    1990-01-01

    A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.
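
    The use of a complete system of solutions can be illustrated numerically for the Dirichlet problem on the unit disk, with the harmonic polynomials Re zⁿ and Im zⁿ as the complete system. The collocation sketch below is an assumption-laden stand-in for the paper's symbolic MACSYMA procedure:

```python
import numpy as np

# Fit Dirichlet data on the unit circle with the complete system
# {1, Re z^n, Im z^n}: harmonic polynomials on the disk.
N = 8
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
zb = np.exp(1j * theta)                   # boundary collocation points

def basis(z):
    cols = [np.ones(len(z))]
    for n in range(1, N + 1):
        cols += [np.real(z**n), np.imag(z**n)]
    return np.column_stack(cols)

g = np.real(zb**3) + 2.0                  # boundary data (itself harmonic)
coef, *_ = np.linalg.lstsq(basis(zb), g, rcond=None)

zi = np.array([0.3 + 0.4j])               # interior evaluation point
u = basis(zi) @ coef
print(np.isclose(u[0], np.real(zi[0]**3) + 2.0))  # True: exact for this data
```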

  16. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  17. Symmetry breaking motion of a vortex pair in a driven cavity

    NASA Astrophysics Data System (ADS)

    McHugh, John; Osman, Kahar; Farias, Jason

    2002-11-01

    The two-dimensional driven cavity problem with an anti-symmetric sinusoidal forcing has been found to exhibit a subcritical symmetry breaking bifurcation (Farias and McHugh, Phys. Fluids, 2002). Equilibrium solutions are either a symmetric vortex pair or an asymmetric motion. The asymmetric motion is an asymmetric vortex pair at low Reynolds numbers, but merges into a three vortex motion at higher Reynolds numbers. The asymmetric solution is obtained by initiating the flow with a single vortex centered in the domain. Symmetric motion is obtained with no initial vortex, or weak initial vortex. The steady three-vortex motion occurs at a Reynolds number of approximately 3000, where the symmetric vortex pair has already gone through a Hopf bifurcation. Further two-dimensional results show that forcing with two full oscillations across the top of the cavity results in two steady vortex motions, depending on initial conditions. Three-dimensional results have even more steady solutions. The results are computational and theoretical.

  18. Strategies and Methodologies for Developing Microbial Detoxification Systems to Mitigate Mycotoxins

    PubMed Central

    Zhu, Yan; Hassan, Yousef I.; Lepp, Dion; Shao, Suqin; Zhou, Ting

    2017-01-01

    Mycotoxins, the secondary metabolites of mycotoxigenic fungi, have been found in almost all agricultural commodities worldwide, causing enormous economic losses in livestock production and severe human health problems. Compared to traditional physical adsorption and chemical reactions, interest in biological detoxification methods that are environmentally sound, safe and highly efficient has seen a significant increase in recent years. However, researchers in this field have been facing tremendous unexpected challenges and are eager to find solutions. This review summarizes and assesses the research strategies and methodologies in each phase of the development of microbiological solutions for mycotoxin mitigation. These include screening of functional microbial consortia from natural samples, isolation and identification of single colonies with biotransformation activity, investigation of the physiological characteristics of isolated strains, identification and assessment of the toxicities of biotransformation products, purification of functional enzymes and the application of mycotoxin decontamination to feed/food production. A full understanding and appropriate application of this tool box should be helpful towards the development of novel microbiological solutions on mycotoxin detoxification. PMID:28387743

  19. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
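
    The flavor of a multistart global search with a derivative-free local phase can be conveyed by a toy sketch. Note this uses a plain compass search on a double-well function and omits the MLSL clustering and MADS machinery of the actual algorithm:

```python
import random

def local_search(f, x, step=0.1, tol=1e-6):
    # Derivative-free compass search: try steps in both directions,
    # halve the step when neither direction improves.
    while step > tol:
        moved = False
        for d in (-step, step):
            if f(x + d) < f(x):
                x += d
                moved = True
        if not moved:
            step /= 2.0
    return x

def multistart(f, lo, hi, n=20, seed=1):
    # Run the local phase from random starts; keep distinct minimizers.
    rng = random.Random(seed)
    sols = []
    for _ in range(n):
        x = local_search(f, rng.uniform(lo, hi))
        if not any(abs(x - s) < 1e-3 for s in sols):
            sols.append(x)
    return sorted(sols)

f = lambda x: (x * x - 1.0)**2  # double well: two global minima at x = -1, +1
sols = multistart(f, -2.0, 2.0)
print(sols)
```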

  20. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

    Simulation has been introduced to solve problems in the form of systems. By using this technique the following two problems can be overcome. First, a problem that has an analytical solution but for which the cost of running an experiment, in money and lives, is high. Second, a problem that exists but has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutations to form a pseudo sampling distribution that will lead to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, being used to verify analytical solutions in inference. This paper also discusses the resampling techniques as simulation techniques. Misunderstandings about these two techniques are examined. Successful usages of both techniques are also explained.
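
    A percentile bootstrap, one of the resampling techniques mentioned, takes only a few lines; the data values here are invented for illustration:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample with replacement, build the pseudo
    # sampling distribution of the statistic, read off its quantiles.
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2))]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
data = [2.1, 2.4, 1.9, 2.8, 2.5, 2.2, 2.6, 2.0]
lo, hi = bootstrap_ci(data, mean)
print(lo < mean(data) < hi)  # True: the interval brackets the sample mean
```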

  1. FOURTH SEMINAR TO THE MEMORY OF D.N. KLYSHKO: Algebraic solution of the synthesis problem for coded sequences

    NASA Astrophysics Data System (ADS)

    Leukhin, Anatolii N.

    2005-08-01

    The algebraic solution of a 'complex' problem of synthesis of phase-coded (PC) sequences with a zero level of side lobes of the cyclic autocorrelation function (ACF) is proposed. It is shown that the solution of the synthesis problem is connected with the existence of difference sets for a given code dimension. The problem of estimating the number of possible code combinations for a given code dimension is solved. It is pointed out that the problem of synthesis of PC sequences is related to fundamental problems of discrete mathematics and, first of all, to a number of combinatorial problems, which, like the number factorisation problem, can be solved by algebraic methods using the theory of Galois fields and groups.
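
    The target property, zero cyclic ACF sidelobes, is easy to check numerically. The sketch below uses a Zadoff-Chu sequence, a different classical construction with the same property (the paper's difference-set-based codes are not reproduced here):

```python
import numpy as np

def zadoff_chu(N, u=1):
    # Odd-length Zadoff-Chu sequence with root u coprime to N
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def cyclic_acf(s):
    # Periodic (cyclic) autocorrelation computed via the FFT
    return np.fft.ifft(np.abs(np.fft.fft(s))**2)

s = zadoff_chu(13)
acf = cyclic_acf(s)
print(np.max(np.abs(acf[1:])) < 1e-9)  # True: all cyclic sidelobes vanish
```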

  2. Exact solutions for the collaborative pickup and delivery problem.

    PubMed

    Gansterer, Margaretha; Hartl, Richard F; Salzmann, Philipp E H

    2018-01-01

    In this study we investigate the decision problem of a central authority in pickup and delivery carrier collaborations. Customer requests are to be redistributed among participants such that the total cost is minimized. We formulate the problem as a multi-depot traveling salesman problem with pickups and deliveries. We apply three well-established exact solution approaches and compare their performance in terms of computational time. To avoid unrealistic solutions with unevenly distributed workload, we extend the problem by introducing minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach. The obtained results can be used as benchmarks for decentralized mechanisms in collaborative pickup and delivery problems.

  3. On pendulums and air resistance. The mathematics and physics of Denis Diderot

    NASA Astrophysics Data System (ADS)

    Dahmen, Sílvio R.

    2015-09-01

    In this article Denis Diderot's Fifth Memoir of 1748 on the problem of a pendulum damped by air resistance is discussed in its historical as well as mathematical aspects. Diderot wrote the Memoir in order to clarify an assumption Newton made without further justification in the first pages of the Principia in connection with an experiment to verify the Third Law of Motion using colliding pendulums. To explain the differences between experimental and theoretical values, Newton assumed the bob was traversed. By giving Newton's arguments a mathematical scaffolding and recasting his geometrical reasoning in the language of differential calculus, Diderot provided a step-by-step solution guide to the problem. He also showed that Newton's assumption was equivalent to having assumed a resistance force F_R proportional to the bob's velocity v, when in fact he believed it should be replaced by F_R ∝ v². His solution is presented in full detail and his results are compared to those obtained from a Lindstedt-Poincaré approximation for an oscillator with quadratic damping. It is shown that, up to a prefactor, both results coincide. Some results that follow from his approach are presented and discussed for the first time. Experimental evidence to support Diderot's or Newton's claims is discussed together with the limitations of their solutions. Some misprints in the original memoir are pointed out.

  4. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    NASA Astrophysics Data System (ADS)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  5. Development of numerical techniques for simulation of magnetogasdynamics and hypersonic chemistry

    NASA Astrophysics Data System (ADS)

    Damevin, Henri-Marie

    Magnetogasdynamics, the science concerned with the mutual interaction between electromagnetic field and flow of electrically conducting gas, offers promising advances in flow control and propulsion of future hypersonic vehicles. Numerical simulations are essential for understanding phenomena, and for research and development. The current dissertation is devoted to the development and validation of numerical algorithms for the solution of multidimensional magnetogasdynamic equations and the simulation of hypersonic high-temperature effects. Governing equations are derived, based on classical magnetogasdynamic assumptions. Two sets of equations are considered, namely the full equations and equations in the low magnetic Reynolds number approximation. Equations are expressed in a suitable formulation for discretization by finite differences in a computational space. For the full equations, Gauss law for magnetism is enforced using Powell's methodology. The time integration method is a four-stage modified Runge-Kutta scheme, amended with a Total Variation Diminishing model in a postprocessing stage. The eigensystem, required for the Total Variation Diminishing scheme, is derived in generalized three-dimensional coordinate system. For the simulation of hypersonic high-temperature effects, two chemical models are utilized, namely a nonequilibrium model and an equilibrium model. A loosely coupled approach is implemented to communicate between the magnetogasdynamic equations and the chemical models. The nonequilibrium model is a one-temperature, five-species, seventeen-reaction model solved by an implicit flux-vector splitting scheme. The chemical equilibrium model computes thermodynamics properties using curve fit procedures. Selected results are provided, which explore the different features of the numerical algorithms. The shock-capturing properties are validated for shock-tube simulations using numerical solutions reported in the literature. 
The computations of superfast flows over corners and in convergent channels demonstrate the performance of the algorithm in multiple dimensions. The implementation of diffusion terms is validated by solving the magnetic Rayleigh problem and the Hartmann problem, for which analytical solutions are available. Predictions of blunt-body type flows are investigated and compared with numerical solutions reported in the literature. The effectiveness of the chemical models for hypersonic flow over a blunt body is examined in various flow conditions. It is shown that the proposed schemes perform well in a variety of test cases, though some limitations have been identified.

  6. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    NASA Astrophysics Data System (ADS)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithographic dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination-type and coherence. Resist chemistry and physics along with substrate, exposure, and all post-exposure processing must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices.
In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models; then some applications that use our distributive computing solution are shown. Topics covered describe why ProLE solutions are needed from an economic and technical aspect, a high-level discussion of how the distributive system works, speed benchmarking, and finally a brief survey of applications including advanced aberrations for lens sensitivity and flare studies, optical proximity correction for a bitcell, and an application that will allow evaluation of the potential of a design to have systematic failures during fabrication.

  7. Moving boundary problems for a rarefied gas: Spatially one-dimensional case

    NASA Astrophysics Data System (ADS)

    Tsuji, Tetsuro; Aoki, Kazuo

    2013-10-01

    Unsteady flows of a rarefied gas in a full space caused by an oscillation of an infinitely wide plate in its normal direction are investigated numerically on the basis of the Bhatnagar-Gross-Krook (BGK) model of the Boltzmann equation. The paper aims at showing properties and difficulties inherent to moving boundary problems in kinetic theory of gases using a simple one-dimensional setting. More specifically, the following two problems are considered: (Problem I) the plate starts a forced harmonic oscillation (forced motion); (Problem II) the plate, which is subject to an external restoring force obeying Hooke’s law, is displaced from its equilibrium position and released (free motion). The physical interest in Problem I lies in the propagation of nonlinear acoustic waves in a rarefied gas, whereas that in Problem II lies in the decay rate of the oscillation of the plate. An accurate numerical method, which is capable of describing singularities caused by the oscillating plate, is developed on the basis of the method of characteristics and is applied to the two problems mentioned above. As a result, the unsteady behavior of the solution, such as the propagation of discontinuities and some weaker singularities in the molecular velocity distribution function, is clarified. Some results are also compared with those based on the existing method.

  8. TOPEX/POSEIDON tides estimated using a global inverse model

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.

    1994-01-01

    Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to combine rationally all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations, and by our prior estimates of the statistics of errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data record (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M(sub 2), S(sub 2), K(sub 1) and O(sub 1).
The inverse solution yields tidal fields which are simultaneously smoother and in better agreement with altimetric and ground truth data than previously proposed tidal models. Relative to the 'default' tidal corrections provided with the TOPEX/POSEIDON GDR, the inverse solution reduces crossover difference variances significantly (approximately 20-30%), even though only a small number of free parameters (approximately 1000) are actually fit to the crossover data.

  9. Region of validity of the finite–temperature Thomas–Fermi model with respect to quantum and exchange corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru

    We determine the region of applicability of the finite–temperature Thomas–Fermi model and its thermal part with respect to quantum and exchange corrections. Very high accuracy of computations has been achieved by using a special approach for the solution of the boundary problem and numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. Also we offer simple approximations of the boundaries of validity for practical applications.

  10. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: IV. Generalized matrix analysis of linear compartment systems.

    PubMed

    Langenbucher, Frieder

    2005-01-01

    A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and by the initial condition specifying in which compartment(s) the drug is present at the beginning. The generalized solution is the time profiles of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
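
    The same polyexponential solution can be sketched outside Excel with standard matrix operations (eigendecomposition of the rate-constant matrix); the two-compartment system, rate constants and dose below are hypothetical:

```python
import numpy as np

# Two-compartment chain: drug moves 1 -> 2 at rate ka, leaves 2 at rate ke.
ka, ke = 1.2, 0.3
K = np.array([[-ka, 0.0],
              [ ka, -ke]])          # d/dt x = K x
x0 = np.array([100.0, 0.0])         # all drug in compartment 1 at t = 0

lam, V = np.linalg.eig(K)
c = np.linalg.solve(V, x0)          # coefficients of the polyexponential solution

def profile(t):
    # x(t) = V diag(exp(lam t)) V^{-1} x0: a sum of exponential terms
    return (V * np.exp(lam * t)) @ c

x = profile(2.0)
bateman = ka * 100.0 / (ke - ka) * (np.exp(-ka * 2.0) - np.exp(-ke * 2.0))
print(np.isclose(x[1], bateman))    # True: matches the closed-form solution
```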

  11. Enhancement Of Water-Jet Stripping Of Foam

    NASA Technical Reports Server (NTRS)

    Cosby, Steven A.; Shockney, Charles H.; Bates, Keith E.; Shalala, John P.; Daniels, Larry S.

    1995-01-01

    Improved robotic high-pressure-water-jet system strips foam insulation from parts without removing adjacent coating materials like paints, primers, and sealants. Even injects water into crevices and blind holes to clean out foam, without harming adjacent areas. Eliminates both cost of full stripping and recoating and problem of disposing of toxic solutions used in preparation for coating. Developed for postflight refurbishing of aft skirts of booster rockets. System includes six-axis robot provided with special end effector and specially written control software, called Aftfoam. Adaptable to cleaning and stripping in other industrial settings.

  12. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks.

    PubMed

    Hedne, Mikael R; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the "warmth ratings" previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related.

  14. Potential interoperability problems facing multi-site radiation oncology centers in The Netherlands

    NASA Astrophysics Data System (ADS)

    Scheurleer, J.; Koken, Ph; Wessel, R.

    2014-03-01

    Aim: To identify potential interoperability problems facing multi-site Radiation Oncology (RO) departments in the Netherlands and solutions for unambiguous multi-system workflows. Specific challenges confronting the RO department of VUmc (RO-VUmc), which is soon to open a satellite department, were characterized. Methods: A nationwide questionnaire survey was conducted to identify possible interoperability problems and solutions. Further detailed information was obtained by in-depth interviews at 3 Dutch RO institutes that already operate in more than one site. Results: The survey had a 100% response rate (n=21). Altogether 95 interoperability problems were described. Most reported problems were on a strategic and semantic level. The majority were DICOM(-RT) and HL7 related (n=65), primarily between treatment planning and verification systems or between departmental and hospital systems. Seven were identified as being relevant for RO-VUmc. Departments have overcome interoperability problems with their own, or with tailor-made vendor solutions. There was little knowledge about or utilization of solutions developed by Integrating the Healthcare Enterprise Radiation Oncology (IHE-RO). Conclusions: Although interoperability problems are still common, solutions have been identified. Awareness of IHE-RO needs to be raised. No major new interoperability problems are predicted as RO-VUmc develops into a multi-site department.

  15. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, to whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.

  16. Approaches to eliminate waste and reduce cost for recycling glass.

    PubMed

    Chao, Chien-Wen; Liao, Ching-Jong

    2011-12-01

    In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem in the manufacturing of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    PubMed

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
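The limitation described in this abstract can be demonstrated on a toy instance: a Pareto-efficient point that no weighted sum recovers. The objective values below are invented for illustration and have no connection to the paper's data.

```python
def pareto_front(points):
    """All nondominated points (both objectives minimized)."""
    return {p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)}

def weighted_sum_minima(points, steps=1000):
    """Points recoverable by minimizing w*f1 + (1-w)*f2 over a grid of weights."""
    return {min(points, key=lambda p: (i / steps) * p[0] + (1 - i / steps) * p[1])
            for i in range(steps + 1)}

points = [(1, 9), (9, 1), (4, 7)]            # (f1, f2) pairs, both minimized
print(sorted(pareto_front(points)))          # [(1, 9), (4, 7), (9, 1)] -- all efficient
print(sorted(weighted_sum_minima(points)))   # [(1, 9), (9, 1)] -- (4, 7) is missed
```

The point (4, 7) is nondominated but lies above the line joining (1, 9) and (9, 1), so no nonnegative weight vector ever selects it: it is an unsupported Pareto-efficient solution.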

  18. Exploiting Elementary Landscapes for TSP, Vehicle Routing and Scheduling

    DTIC Science & Technology

    2015-09-03

    Traveling Salesman Problem (TSP) and Graph Coloring are elementary. Problems such as MAX-kSAT are a superposition of k elementary landscapes. This ... search space. Problems such as the Traveling Salesman Problem (TSP), Graph Coloring, the Frequency Assignment Problem, as well as Min-Cut and Max-Cut ... echoing our earlier results on the Traveling Salesman Problem. Using two locally optimal solutions as “parent” solutions, we have developed a

  19. Analysing student written solutions to investigate if problem-solving processes are evident throughout

    NASA Astrophysics Data System (ADS)

    Kelly, Regina; McLoughlin, Eilish; Finlayson, Odilla E.

    2016-07-01

    An interdisciplinary science course has been implemented at a university with the intention of providing students the opportunity to develop a range of key skills in relation to real-world connections of science, problem-solving, information and communications technology use, and teamwork, while linking subject knowledge in each of the science disciplines. One of the problems used in this interdisciplinary course has been selected to evaluate whether it affords students the opportunity to explicitly display problem-solving processes. While the benefits of implementing problem-based learning have been well reported, far less research has been devoted to methods of assessing student problem-solving solutions. A problem-solving theoretical framework was used as a tool to assess student written solutions to indicate whether problem-solving processes were present. In two academic years, student problem-solving processes were satisfactory for exploring and understanding, representing and formulating, and planning and executing, indicating that student collaboration on problems is a good initiator of developing these processes. In both academic years, students displayed poor monitoring and reflecting (MR) processes at the intermediate level. A key impact of evaluating student work in this way is that it facilitated meaningful feedback about the students' problem-solving process rather than solely assessing the correctness of problem solutions.

  20. DROMO formulation for planar motions: solution to the Tsien problem

    NASA Astrophysics Data System (ADS)

    Urrutxua, Hodei; Morante, David; Sanjurjo-Rivo, Manuel; Peláez, Jesús

    2015-06-01

    The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo-Stiefel methods.
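As a quick numerical cross-check of such propagations: because the thrust is purely radial, angular momentum h is conserved and the Tsien setup reduces to a single radial ODE, r'' = h²/r³ − μ/r² + a. The sketch below integrates it with a plain RK4 stepper in nondimensional units (μ = h = 1; the thrust level a is an arbitrary assumed value). It is not DROMO or any of the methods compared in the paper.

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
            for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4)]

mu, h, acc = 1.0, 1.0, 0.02   # nondimensional gravity, angular momentum, radial thrust

def rhs(t, y):
    r, rdot = y
    return [rdot, h**2 / r**3 - mu / r**2 + acc]

def energy(y):
    """Effective energy 0.5*rdot**2 + h**2/(2 r**2) - mu/r - acc*r, conserved."""
    r, rdot = y
    return 0.5 * rdot**2 + h**2 / (2 * r**2) - mu / r - acc * r

y, dt = [1.0, 0.0], 1e-3      # start on the unit circular orbit (Tsien's case)
e0 = energy(y)
for i in range(20000):
    y = rk4_step(rhs, i * dt, y, dt)
print(abs(energy(y) - e0) < 1e-9)   # True (energy drift well below 1e-9)
```

The conserved effective energy gives an independent accuracy check of the kind the analytic elliptic-integral solution provides in the paper: for this small thrust the radius oscillates in a narrow band above the initial circle.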

  1. Exact Analytical Solutions for Elastodynamic Impact

    DTIC Science & Technology

    2015-11-30

    corroborated by derivation of exact discrete solutions from recursive equations for the impact problems. 15. SUBJECT TERMS One-dimensional impact; Elastic...wave propagation; Laplace transform; Floor function; Discrete solutions 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT UU 18...impact Elastic wave propagation Laplace transform Floor function Discrete solutionsWe consider the one-dimensional impact problem in which a semi

  2. Solution of a Complex Least Squares Problem with Constrained Phase.

    PubMed

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
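A minimal numerical sketch of the phase-constrained problem: for a fixed common phase phi the problem becomes a real least-squares system, so scanning phi and keeping the best fit recovers the solution. This scan is an assumption-laden stand-in; the paper derives a direct, scan-free method.

```python
import numpy as np

def phase_constrained_lstsq(A, b, n_phi=3600):
    """Least squares ||A x - b|| with x = u * exp(1j*phi), u real-valued.

    For fixed phi the normal equations in the real vector u are
        Re(A^H A) u = Re(exp(-1j*phi) * A^H b),
    so we solve that system on a grid of phi and keep the best fit.
    (A numerical sketch only; the paper gives a direct solution.)
    """
    M = (A.conj().T @ A).real            # Re(A^H A)
    c = A.conj().T @ b                   # A^H b
    best_res, best_x = np.inf, None
    for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):  # sign folds into u
        u = np.linalg.solve(M, (np.exp(-1j * phi) * c).real)
        x = u * np.exp(1j * phi)
        res = np.linalg.norm(A @ x - b)
        if res < best_res:
            best_res, best_x = res, x
    return best_x

# Synthetic check: data generated with a common phase are recovered.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
b = A @ (rng.normal(size=4) * np.exp(1j * 0.7))
x = phase_constrained_lstsq(A, b)
print(np.linalg.norm(A @ x - b) < 1e-2 * np.linalg.norm(b))   # True
```

Only phases in [0, π) need to be scanned, since a sign flip of the phase factor can be absorbed into the real vector u.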

  3. Industrial noise control: Some case histories, volume 1

    NASA Technical Reports Server (NTRS)

    Hart, F. D.; Neal, C. L.; Smetana, F. O.

    1974-01-01

    A collection of solutions to industrial noise problems is presented. Each problem is described in simple terms, with noise measurements where available, and the solution is given, often with explanatory figures. Where the solution rationale is not obvious, an explanatory paragraph is usually appended. As a preface to these solutions, a short exposition is provided of some of the guiding concepts used by noise control engineers in devising their solutions.

  4. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic aspect of TLS multi-station least-squares adjustment, with the main focus on the datum problem. The datum problem is, in contrast to previously published research, theoretically analyzed and solved, where the solution is based on the nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions from the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
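The role of the nullspace in diagnosing a datum defect can be shown on a toy network (not the TLS multi-station model of the paper): heights estimated from height differences, where a common shift of all heights is unobservable and therefore spans the nullspace of the design matrix.

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Orthonormal basis of the nullspace of A via SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s[0]))
    return vt[rank:].T

# Toy datum problem: three heights observed only through differences.
# Rows model h2-h1, h3-h2, h3-h1; adding a constant to all heights
# changes no observation, so the datum defect is one free translation.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [-1.0, 0.0, 1.0]])
N = nullspace(A)
print(N.shape[1])   # 1  (one datum parameter must be constrained)
```

The single nullspace vector is the constant vector, which is exactly the minimal constraint (fixing one height) needed for a unique least-squares solution.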

  5. Local search heuristic for the discrete leader-follower problem with multiple follower objectives

    NASA Astrophysics Data System (ADS)

    Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand

    2016-10-01

    We study a discrete bilevel problem, called as well as leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level can include variables of both levels. For such ill-posed problem we define feasible and optimal solutions for pessimistic case. A central point of this work is a two stage method to get a feasible solution under the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints. The target of the second stage is a pessimistic feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed integer programs. The method is integrated inside a local search based heuristic that is designed to find near-optimal leader solutions.

  6. Elasticity Theory Solution of the Problem on Plane Bending of a Narrow Layered Cantilever Beam by Loads at Its Free End

    NASA Astrophysics Data System (ADS)

    Goryk, A. V.; Koval'chuk, S. B.

    2018-05-01

    An exact elasticity theory solution for the problem on plane bending of a narrow layered composite cantilever beam by tangential and normal loads distributed on its free end is presented. Components of the stress-strain state are found for the whole layers package by directly integrating differential equations of the plane elasticity theory problem by using an analytic representation of piecewise constant functions of the mechanical characteristics of layer materials. The continuous solution obtained is realized for a four-layer beam with account of kinematic boundary conditions simulating the rigid fixation of its one end. The solution obtained allows one to predict the strength and stiffness of composite cantilever beams and to construct applied analytical solutions for various problems on the elastic bending of layered beams.

  7. Evolution of magnetic field and atmospheric response. I - Three-dimensional formulation by the method of projected characteristics. II - Formulation of proper boundary equations. [stellar magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Nakagawa, Y.

    1981-01-01

    The method described as the method of nearcharacteristics by Nakagawa (1980) is renamed the method of projected characteristics. Making full use of properties of the projected characteristics, a new and simpler formulation is developed. As a result, the formulation for the examination of the general three-dimensional problems is presented. It is noted that since in practice numerical solutions must be obtained, the final formulation is given in the form of difference equations. The possibility of including effects of viscous and ohmic dissipations in the formulation is considered, and the physical interpretation is discussed. A systematic manner is then presented for deriving physically self-consistent, time-dependent boundary equations for MHD initial boundary problems. It is demonstrated that the full use of the compatibility equations (differential equations relating variations at two spatial locations and times) is required in determining the time-dependent boundary conditions. In order to provide a clear physical picture as an example, the evolution of axisymmetric global magnetic field by photospheric differential rotation is considered.

  8. Nonadiabatic effects in electronic and nuclear dynamics

    PubMed Central

    Bircher, Martin P.; Liberatore, Elisa; Browning, Nicholas J.; Brickel, Sebastian; Hofmann, Cornelia; Patoz, Aurélien; Unke, Oliver T.; Zimmermann, Tomáš; Chergui, Majed; Hamm, Peter; Keller, Ursula; Meuwly, Markus; Woerner, Hans-Jakob; Vaníček, Jiří; Rothlisberger, Ursula

    2018-01-01

    Due to their very nature, ultrafast phenomena are often accompanied by the occurrence of nonadiabatic effects. From a theoretical perspective, the treatment of nonadiabatic processes makes it necessary to go beyond the (quasi) static picture provided by the time-independent Schrödinger equation within the Born-Oppenheimer approximation and to find ways to tackle instead the full time-dependent electronic and nuclear quantum problem. In this review, we give an overview of different nonadiabatic processes that manifest themselves in electronic and nuclear dynamics ranging from the nonadiabatic phenomena taking place during tunnel ionization of atoms in strong laser fields to the radiationless relaxation through conical intersections and the nonadiabatic coupling of vibrational modes and discuss the computational approaches that have been developed to describe such phenomena. These methods range from the full solution of the combined nuclear-electronic quantum problem to a hierarchy of semiclassical approaches and even purely classical frameworks. The power of these simulation tools is illustrated by representative applications and the direct confrontation with experimental measurements performed in the National Centre of Competence for Molecular Ultrafast Science and Technology. PMID:29376108

  9. Modern Techniques in Acoustical Signal and Image Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V

    2002-04-04

    Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same: to extract the desired information and reject the extraneous, that is, to develop a signal processing scheme that achieves this goal. In this paper, we briefly discuss this underlying philosophy from a "bottom-up" approach, enabling the problem to dictate the solution rather than vice versa.
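A minimal example of the first strategy mentioned above, transforming measured data to another domain: a 50 Hz tone buried in noise is hard to see in the raw samples but dominates the spectrum. All signal parameters are invented for illustration.

```python
import numpy as np

fs = 1000.0                        # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50.0 * t) + 2.0 * rng.normal(size=t.size)  # SNR < 1

# Transform to the frequency domain; the tone stands out as a sharp peak.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak)   # 50.0
```

The coherent tone concentrates into one frequency bin of magnitude roughly N/2, while the noise spreads across all bins, which is why the transform exposes what the raw time series hides.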

  10. Problem Solution Project: Transforming Curriculum and Empowering Urban Students and Teachers

    ERIC Educational Resources Information Center

    Jarrett, Olga S.; Stenhouse, Vera

    2011-01-01

    This article presents findings of 6 years of implementing a Problem Solution Project, an assignment influenced by service learning, problem-based learning, critical theory, and critical pedagogy whereby teachers help children tackle real problems. Projects of 135 teachers in an urban certification/master's program were summarized by cohort year…

  11. A Coding Scheme for Analysing Problem-Solving Processes of First-Year Engineering Students

    ERIC Educational Resources Information Center

    Grigg, Sarah J.; Benson, Lisa C.

    2014-01-01

    This study describes the development and structure of a coding scheme for analysing solutions to well-structured problems in terms of cognitive processes and problem-solving deficiencies for first-year engineering students. A task analysis approach was used to assess students' problem solutions using the hierarchical structure from a…

  12. Skill Acquisition: Compilation of Weak-Method Problem Solutions.

    ERIC Educational Resources Information Center

    Anderson, John R.

    According to the ACT theory of skill acquisition, cognitive skills are encoded by a set of productions, which are organized according to a hierarchical goal structure. People solve problems in new domains by applying weak problem-solving procedures to declarative knowledge they have about this domain. From these initial problem solutions,…

  13. Analyzing Interpersonal Problem Solving in Terms of Solution Focused Approach and Humor Styles of University Student

    ERIC Educational Resources Information Center

    Koc, Hayri; Arslan, Coskun

    2017-01-01

    In this study university students interpersonal problem solving approaches were investigated in terms of solution focused approach and humor styles. The participants were 773 (542 female and 231 male, between 17-33 years old) university students. To determine the university students' problem solving approaches "Interpersonal Problem Solving…

  14. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
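The cascade idea can be sketched in a few lines, assuming SciPy is available and substituting off-the-shelf optimizers (Nelder-Mead, Powell, BFGS) and the Rosenbrock test function for the paper's ten optimizers and aircraft/engine problems:

```python
import numpy as np
from scipy.optimize import minimize, rosen

def cascade(x0, methods=("Nelder-Mead", "Powell", "BFGS"), jitter=0.05, seed=0):
    """Run several optimizers in sequence, pseudorandomly perturbing the
    design variables between stages, then polish with a final stage."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for method in methods:
        x = minimize(rosen, x, method=method).x             # one stage
        x = x * (1 + jitter * rng.standard_normal(x.size))  # inter-stage perturbation
    return minimize(rosen, x, method="BFGS").x              # final polish

x = cascade([-1.2, 1.0])
print(np.allclose(x, 1.0, atol=1e-3))   # True: the Rosenbrock optimum is all ones
```

The perturbation between stages plays the role of the pseudorandom scheme described in the abstract: it nudges the iterate off whatever point stalled the previous optimizer before the next one takes over.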

  15. Incremental planning to control a blackboard-based problem solver

    NASA Technical Reports Server (NTRS)

    Durfee, E. H.; Lesser, V. R.

    1987-01-01

    To control problem solving activity, a planner must resolve uncertainty about which specific long-term goals (solutions) to pursue and about which sequences of actions will best achieve those goals. A planner is described that abstracts the problem solving state to recognize possible competing and compatible solutions and to roughly predict the importance and expense of developing these solutions. With this information, the planner plans sequences of problem solving activities that most efficiently resolve its uncertainty about which of the possible solutions to work toward. The planner only details actions for the near future because the results of these actions will influence how (and whether) a plan should be pursued. As problem solving proceeds, the planner adds new details to the plan incrementally, and monitors and repairs the plan to ensure that it achieves its goals whenever possible. Through experiments, the researchers illustrate how these new mechanisms significantly improve problem solving decisions and reduce overall computation. They briefly discuss current research directions, including how these mechanisms can improve a problem solver's real-time response and enhance cooperation in a distributed problem solving network.

  16. Comment on “A similarity solution for laminar thermal boundary layer over a flat plate with a convective surface boundary condition” by A. Aziz, Comm. Nonlinear Sci. Numer. Simul. 2009;14:1064-8

    NASA Astrophysics Data System (ADS)

    Magyari, Eugen

    2011-01-01

    In a recent paper published in this Journal the title problem has been investigated numerically. In the present paper the exact solution for the temperature boundary layer is given in terms of the solution of the flow problem (the Blasius problem) in a compact integral form.

  17. From "Wiggly Structures" to "Unshaky Towers": Problem Framing, Solution Finding, and Negotiation of Courses of Actions During a Civil Engineering Unit for Elementary Students.

    ERIC Educational Resources Information Center

    Roth, Wolff-Michael

    1995-01-01

    Investigated problem- and solution-related activity of (n=28) fourth and fifth graders in ill-defined and open-ended settings. In the course of their negotiations, students demonstrated an uncanny competence to frame and reframe problems and solutions and to decide courses of actions of different complexities in spite of the ambiguous nature of…

  18. Trading a Problem-solving Task

    NASA Astrophysics Data System (ADS)

    Matsubara, Shigeo

    This paper focuses on a task allocation problem, especially cases where the task is to find a solution to a search problem or a constraint satisfaction problem. If the search problem is hard to solve, a contractor may fail to find a solution. Here, the more computational resources, such as CPU time, the contractor invests in solving the search problem, the more likely a solution is to be found. This raises a new problem: the contractee has to find an appropriate level of quality in task achievement as well as an efficient allocation of tasks among contractors. For example, if the contractee asks the contractor to find a solution with certainty, the payment from the contractee to the contractor may exceed the contractee's benefit from obtaining a solution, which discourages the contractee from trading a task. However, solving this problem is difficult because the contractee can neither ascertain the contractor's problem-solving ability, such as the amount of available resources and knowledge (e.g., algorithms and heuristics), nor monitor what amount of resources is actually invested in solving the allocated task. To solve this problem, we propose a task allocation mechanism that is able to choose an appropriate level of quality in task achievement, and we prove that this mechanism guarantees that each contractor reveals its true information. Moreover, we show by computer simulation that our mechanism can increase the contractee's utility compared with a simple auction mechanism.

  19. General Tricomi-Rassias problem and oblique derivative problem for generalized Chaplygin equations

    NASA Astrophysics Data System (ADS)

    Wen, Guochun; Chen, Dechang; Cheng, Xiuzhen

    2007-09-01

    Many authors have discussed the Tricomi problem for some second order equations of mixed type, which has important applications in gas dynamics. In particular, Bers proposed the Tricomi problem for Chaplygin equations in multiply connected domains [L. Bers, Mathematical Aspects of Subsonic and Transonic Gas Dynamics, Wiley, New York, 1958], and Rassias proposed the exterior Tricomi problem for mixed equations in a doubly connected domain and proved the uniqueness of solutions for the problem [J.M. Rassias, Lecture Notes on Mixed Type Partial Differential Equations, World Scientific, Singapore, 1990]. In the present paper, we discuss the general Tricomi-Rassias problem for generalized Chaplygin equations. This is a general oblique derivative problem that includes the exterior Tricomi problem as a special case. We first give the representation of solutions of the general Tricomi-Rassias problem, and then prove the uniqueness and existence of solutions for the problem by a new method. We shall also discuss another general oblique derivative problem for generalized Chaplygin equations.

  20. Open problems in artificial life.

    PubMed

    Bedau, M A; McCaskill, J S; Packard, N H; Rasmussen, S; Adami, C; Green, D G; Ikegami, T; Kaneko, K; Ray, T S

    2000-01-01

    This article lists fourteen open problems in artificial life, each of which is a grand challenge requiring a major advance on a fundamental issue for its solution. Each problem is briefly explained, and, where deemed helpful, some promising paths to its solution are indicated.

  1. Convergence of a sequence of dual variables at the solution of a completely degenerate problem of linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dikin, I.

    1994-12-31

    We survey results on the convergence of the primal affine scaling method at solutions of a completely degenerate problem of linear programming. Moreover, we study the case when the next approximation lies on the boundary of the affine scaling ellipsoid. Convergence of the successive approximations to an interior point u of the solution set of the dual problem is proved. The coordinates of the vector u are determined only by the input data of the problem; they do not depend on the choice of the starting point.
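A textbook sketch of the primal affine scaling iteration on a tiny, nondegenerate example (the paper's completely degenerate case and its boundary analysis are not reproduced): minimize c.T x subject to Ax = b, x > 0, with the dual estimates w converging alongside the primal iterates.

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.9, iters=200):
    """Primal affine scaling for min c.T x s.t. Ax = b, x > 0, starting
    from a strictly feasible interior point x (a textbook sketch)."""
    w = None
    for _ in range(iters):
        X2 = np.diag(x**2)
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
        r = c - A.T @ w                                  # reduced costs
        d = -X2 @ r                                      # search direction
        if np.all(d >= -1e-12):                          # no descent left: stop
            break
        step = gamma * min(-x[i] / d[i] for i in range(len(x)) if d[i] < -1e-12)
        x = x + step * d                                 # stay strictly interior
    return x, w

# min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0   -> primal optimum (1, 0), dual w = 1
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, w = affine_scaling(A, b, c, np.array([0.5, 0.5]))
print(np.round(x, 3))   # approximately [1. 0.]
```

Note that the search direction satisfies A d = 0, so feasibility Ax = b is preserved exactly, and the sequence of dual estimates w approaches the dual solution, mirroring the convergence property the abstract discusses.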

  2. Problem of gas accretion on a gravitational center

    NASA Technical Reports Server (NTRS)

    Ladygin, V. A.

    1980-01-01

    A method for the approximate solution of the problem of accretion onto a rapidly moving gravitational center is developed. This solution is obtained in the vicinity of the axis of symmetry in the region of potential flow. The solution of the problem of stationary gas accretion onto a moving gravitational center models the motion of matter in interstellar space in the vicinity of a black hole. A detailed picture of gas accretion onto a black hole is of interest in connection with the problem of observing black holes.

  3. Working wonders? investigating insight with magic tricks.

    PubMed

    Danek, Amory H; Fraps, Thomas; von Müller, Albrecht; Grothe, Benedikt; Ollinger, Michael

    2014-02-01

    We propose a new approach to differentiate between insight and noninsight problem solving, by introducing magic tricks as a problem-solving domain. We argue that magic tricks are ideally suited to investigate representational change, the key mechanism that yields sudden insight into the solution of a problem, because in order to gain insight into the magicians' secret method, observers must overcome implicit constraints and thus change their problem representation. In Experiment 1, 50 participants were exposed to 34 different magic tricks, asking them to find out how the trick was accomplished. Upon solving a trick, participants indicated if they had reached the solution either with or without insight. Insight was reported in 41.1% of solutions. The new task domain revealed differences in solution accuracy, time course and solution confidence, with insight solutions being more likely to be true, reached earlier, and obtaining higher confidence ratings. In Experiment 2, we explored what role self-imposed constraints actually play in magic tricks. Sixty-two participants were presented with 12 magic tricks. One group received verbal cues, providing solution-relevant information without giving the solution away. The control group received no informative cue. Experiment 2 showed that participants' constraints were susceptible to verbal cues, resulting in higher solution rates. Thus, magic tricks provide more detailed information about the differences between insightful and noninsightful problem solving, and the underlying mechanisms that are necessary to have an insight. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach

    NASA Technical Reports Server (NTRS)

    Chien, S.; Gratch, J.

    1994-01-01

    One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
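The relaxation loop can be illustrated with a toy model in which each constraint restricts a single scalar decision variable to an interval and carries a priority; the least important constraints are dropped until the remainder become satisfiable. This is a sketch of the iterative-relaxation idea only, not the scheduler's actual heuristics or utility function.

```python
def relax_until_satisfiable(constraints):
    """Drop least-important constraints until the rest are satisfiable.

    Each constraint is (priority, lo, hi) restricting a scalar decision
    variable; "satisfiable" means the active intervals intersect.
    """
    active = sorted(constraints, key=lambda con: -con[0])  # most important first
    while active:
        lo = max(con[1] for con in active)
        hi = min(con[2] for con in active)
        if lo <= hi:                      # satisfiable: return a solution
            return (lo + hi) / 2, active
        active.pop()                      # relax the least important constraint
    return None, []

cons = [(3, 0, 10), (2, 8, 12), (1, 0, 5)]   # priorities 3 > 2 > 1; jointly infeasible
value, kept = relax_until_satisfiable(cons)
print(value, [con[0] for con in kept])   # 9.0 [3, 2]
```

The three intervals have no common point, so the priority-1 constraint is relaxed, after which the two most important constraints admit the satisficing value 9.0.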

  5. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem by using classical calculus of variations methods. Two perturbation methods were devised. These methods attempted to reduce the sensitivity of the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solutions resulting from a number of different initial co-state estimates. The perturbation methods were compared. It is found that they are an improvement over existing methods.

  6. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, and it is generally slow and lacks a convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data fidelity error, and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. The condition for convergence of gradient-based methods is derived, and a primal-dual hybrid gradient (PDHG) method is used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality, and quantification. PMID:28253298
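
    The core POCS idea, alternating projections onto convex sets until the iterate lies in their intersection, can be illustrated with a toy two-set example. Both sets below (a nonnegativity constraint and an l2 ball standing in for a data-fidelity bound) are invented; this is a sketch of plain POCS, not the paper's FS-POCS:

```python
import math

def project_nonneg(x):
    """Projection onto the nonnegative orthant."""
    return [max(v, 0.0) for v in x]

def project_ball(x, center, radius):
    """Projection onto the l2 ball of the given center and radius."""
    d = math.dist(x, center)
    if d <= radius:
        return list(x)
    t = radius / d
    return [c + t * (v - c) for v, c in zip(x, center)]

center, radius = [2.0, -1.0], 1.5     # toy "data fidelity" set
x = [-3.0, 4.0]                       # infeasible starting point
for _ in range(100):                  # alternating projections (POCS)
    x = project_ball(project_nonneg(x), center, radius)

# the limit lies in the intersection of the two convex sets
print(min(x) >= -1e-9, math.dist(x, center) <= radius + 1e-9)
```

    When the intersection is nonempty, as here, the alternating sequence converges to a point in it; the paper's contribution is cycling through several such constraint sets with a fast projection for the bounded-TV set.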

  7. Boundary-integral methods in elasticity and plasticity. [solutions of boundary value problems

    NASA Technical Reports Server (NTRS)

    Mendelson, A.

    1973-01-01

    Recently developed methods that use boundary-integral equations applied to elastic and elastoplastic boundary value problems are reviewed. Direct, indirect, and semidirect methods using potential functions, stress functions, and displacement functions are described. Examples of the use of these methods for torsion problems, plane problems, and three-dimensional problems are given. It is concluded that the boundary-integral methods represent a powerful tool for the solution of elastic and elastoplastic problems.

  8. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
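
    The Picard half of the method rests on fixed-point iteration of the integral form of the ODE. A plain Picard sketch on a uniform grid (the paper's modified Chebyshev-Picard iteration instead uses Chebyshev nodes and is considerably more elaborate), applied to x' = x, x(0) = 1, whose exact solution is e^t:

```python
# Picard iteration: x_{k+1}(t) = x(0) + integral_0^t f(x_k(s)) ds,
# discretized with the cumulative trapezoid rule on a uniform grid.
N, T = 200, 1.0
h = T / N
f = lambda x: x                       # right-hand side of x' = x

x = [1.0] * (N + 1)                   # initial guess: constant trajectory
for _ in range(30):                   # Picard sweeps
    integral = [0.0]
    for i in range(N):                # cumulative trapezoid integral
        integral.append(integral[-1] + 0.5 * h * (f(x[i]) + f(x[i + 1])))
    x = [1.0 + I for I in integral]

print(round(x[-1], 4))                # converges toward e ≈ 2.7183
```

    Each sweep updates the whole trajectory at once, which is the "path approximation" property the abstract exploits: cheap low-fidelity sweeps can do most of the work before high-fidelity force evaluations are needed.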

  9. Definition and use of Solution-focused Sustainability Assessment: A novel approach to generate, explore and decide on sustainable solutions for wicked problems.

    PubMed

    Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A

    2016-05-01

    This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and its translation to sustainability metrics. We suggest adding a solution-focused approach. The SfSA framework is collated from elements of risk assessment, risk governance, adaptive management, and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems, and agricultural products. Non-removal, however, would reduce drainage capacity and thereby increase the risk of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA framework served to solve this problem. Important elements were early exploration of a wide 'solution space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy.
The stakeholder participation and the intensive communication throughout the project resulted in broad support for both the scientific approaches and results, as well as for policy implementation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differencing of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  11. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum-error landing problem with convexified constraints, then applies that solution to a minimum-fuel landing problem with convexified constraints. The result is a minimum-error and minimum-fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.

  12. The Pizza Problem: A Solution with Sequences

    ERIC Educational Resources Information Center

    Shafer, Kathryn G.; Mast, Caleb J.

    2008-01-01

    This article addresses the issues of coaching and assessing. A preservice middle school teacher's unique solution to the Pizza problem was not what the professor expected. The student's solution strategy, based on sequences and a reinvention of Pascal's triangle, is explained in detail. (Contains 8 figures.)

  13. Combination of graph heuristics in producing initial solution of curriculum based course timetabling problem

    NASA Astrophysics Data System (ADS)

    Wahid, Juliana; Hussin, Naimah Mohd

    2016-08-01

    The construction of a population of initial solutions is a crucial task in population-based metaheuristic approaches to the curriculum-based university course timetabling problem, because it can affect the convergence speed and also the quality of the final solution. This paper explores combinations of graph heuristics in the construction approach for the curriculum-based course timetabling problem to produce a population of initial solutions. The graph heuristics were set as single heuristics and as combinations of two heuristics. In addition, several ways of assigning courses to rooms and timeslots are implemented. All heuristic settings are then tested on the same curriculum-based course timetabling problem instances and compared with each other in terms of the number of initial solutions produced. The results show that the combination of the saturation degree heuristic followed by the largest degree heuristic produces the largest population of initial solutions. The results from this study can be used in the improvement phase of algorithms that use a population of initial solutions.
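
    To make the graph-heuristic idea concrete, here is a toy sketch of one heuristic from the family the paper studies: order courses by largest degree (most conflicts first) and greedily assign each to the first conflict-free timeslot. The conflict graph and course names are invented:

```python
# Toy largest-degree construction heuristic (invented data, rooms omitted):
# courses sharing students conflict and must land in different timeslots.
conflicts = {
    "math":    {"physics", "stats"},
    "physics": {"math", "chem"},
    "stats":   {"math"},
    "chem":    {"physics"},
    "art":     set(),
}

order = sorted(conflicts, key=lambda c: -len(conflicts[c]))  # largest degree
slot = {}
for course in order:
    used = {slot[n] for n in conflicts[course] if n in slot}
    slot[course] = next(t for t in range(len(conflicts)) if t not in used)

print(slot)
# every pair of conflicting courses ends up in different timeslots
print(all(slot[a] != slot[b] for a in conflicts for b in conflicts[a]))
```

    The saturation degree heuristic the paper favors is the dynamic variant: instead of a fixed ordering, it repeatedly picks the course whose neighbors currently occupy the most distinct timeslots.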

  14. A hybrid heuristic for the multiple choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd

    2013-08-01

    In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
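
    The choice constraint, exactly one item per class, is what distinguishes this problem from the plain knapsack. A tiny exact dynamic program for the single-dimension case makes the structure concrete (the data are invented; the paper's heuristic targets far larger, multidimensional instances where such DPs are impractical):

```python
# Exact DP sketch for a tiny multiple-choice knapsack: exactly one
# (value, weight) item must be chosen from each class, total weight bounded.
classes = [
    [(6, 3), (4, 2)],          # class 0: pick exactly one of these
    [(5, 4), (3, 1)],          # class 1
    [(7, 5), (2, 2)],          # class 2
]
capacity = 8

NEG = float("-inf")
dp = [NEG] * (capacity + 1)    # best value at each used weight
dp[0] = 0
for items in classes:          # one layer per class enforces "exactly one"
    nxt = [NEG] * (capacity + 1)
    for w in range(capacity + 1):
        if dp[w] == NEG:
            continue
        for v, wt in items:
            if w + wt <= capacity:
                nxt[w + wt] = max(nxt[w + wt], dp[w] + v)
    dp = nxt

print(max(dp))   # → 14: items (4,2), (3,1), (7,5) fit the capacity of 8
```

    The paper's approach works instead with linear relaxations and variable fixing, but the feasible set being searched is exactly the one this DP enumerates.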

  15. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (named complicating constraints) which makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction in computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful in cases where computation time is a critical factor for obtaining an optimized solution in due time.

  16. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 less than or equal to a/c less than or equal to 1, depth: 0.2 less than or equal to a/B less than or equal to 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 less than or equal to E/ys less than or equal to 1,000, and hardening: 3 less than or equal to n less than or equal to 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.

  17. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
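
    The parametric form mentioned above is, in the classical Dinkelbach scheme for fractional programming, the problem max_x N(x) - λ·D(x), with λ updated to the current ratio until the parametric optimum reaches zero. A scalar sketch with invented "throughput" and "power" functions (not the paper's network model):

```python
# Dinkelbach-style parametric iteration for maximizing N(x)/D(x).
def N(x): return 1 + 2 * x - x ** 2   # toy "throughput" (invented)
def D(x): return 1 + x                # toy "power", positive on [0, 1]

xs = [i / 1000 for i in range(1001)]  # feasible set: grid on [0, 1]

lam = 0.0
for _ in range(50):
    x = max(xs, key=lambda z: N(z) - lam * D(z))   # parametric subproblem
    if abs(N(x) - lam * D(x)) < 1e-12:             # F(lam) = 0 -> optimal
        break
    lam = N(x) / D(x)                              # ratio update

print(round(x, 3), round(lam, 4))     # maximizer and optimal ratio
```

    Each subproblem here is a simple concave maximization, mirroring the abstract's observation that the transformed problem is concave with a water-filling-type solution in the vector case.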

  18. Turing patterns in parabolic systems of conservation laws and numerically observed stability of periodic waves

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Jung, Soyeun; Zumbrun, Kevin

    2018-03-01

    Turing patterns on unbounded domains have been widely studied in systems of reaction-diffusion equations. However, up to now, they have not been studied for systems of conservation laws. Here, we (i) derive conditions for Turing instability in conservation laws and (ii) use these conditions to find families of periodic solutions bifurcating from uniform states, numerically continuing these families into the large-amplitude regime. For the examples studied, numerical stability analysis suggests that stable periodic waves can emerge either from supercritical Turing bifurcations or, via secondary bifurcation as amplitude is increased, from subcritical Turing bifurcations. This answers in the affirmative a question of Oh and Zumbrun as to whether stable periodic solutions of conservation laws can occur. Determination of a full small-amplitude stability diagram - specifically, determination of rigorous Eckhaus-type stability conditions - remains an interesting open problem.

  19. Numerical modeling of thermal refraction in liquids in the transient regime.

    PubMed

    Kovsh, D; Hagan, D; Van Stryland, E

    1999-04-12

    We present the results of modeling nanosecond pulse propagation in optically absorbing liquid media. Acoustic and electromagnetic wave equations must be solved simultaneously to model refractive index changes due to thermal expansion and/or electrostriction, which are highly transient phenomena on a nanosecond time scale. Although we consider situations with cylindrical symmetry where the paraxial approximation is valid, this is still a computation-intensive problem, as beam propagation through optically thick media must be modeled. We compare the full solution of the acoustic wave equation with the approximation of instantaneous expansion (the steady-state solution) and hence determine the regimes of validity of this approximation. We also find that the refractive index change obtained from the photo-acoustic equation overshoots its steady-state value once the ratio of the pulsewidth to the acoustic transit time exceeds unity.

  20. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    PubMed

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M^2 ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruences and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift, and comparison operations, namely bitwise and full addition, subtraction, left shift, and comparison, can be performed using strands of DNA.
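
    The definition itself is easy to state in conventional code, which is useful for checking small cases against the DNA algorithm's output (this brute-force check is mine, not the paper's method):

```python
# C is a quadratic congruence (mod n) iff some M satisfies M^2 ≡ C (mod n).
# Brute force over one full residue cycle suffices: squares repeat mod n.
def is_quadratic_congruence(C, n):
    return any(M * M % n == C % n for M in range(n))

residues = [C for C in range(11) if is_quadratic_congruence(C, 11)]
print(residues)   # → [0, 1, 3, 4, 5, 9], the squares modulo 11
```

    For cryptographic-size n this O(n) scan is hopeless, which is precisely why the problem is interesting for massively parallel models such as DNA computing.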

  1. Possible Solutions as a Concept in Behavior Change Interventions.

    PubMed

    Mahoney, Diane E

    2018-04-24

    Nurses are uniquely positioned to implement behavior change interventions. Yet nursing interventions have traditionally resulted from nurses' problem-solving rather than from allowing the patient to self-generate possible solutions for attaining specific health outcomes. The purpose of this review is to clarify the meaning of possible solutions in behavior change interventions. Walker and Avant's method of concept analysis serves as the framework for examination of possible solutions. Possible solutions can be defined as continuous strategies initiated by patients and families to overcome existing health problems. As nurses engage in behavior change interventions, supporting patients and families in problem-solving will optimize health outcomes and transform clinical practice. © 2018 NANDA International, Inc.

  2. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem with a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow, and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point, and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert of the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.

  3. Insight and search in Katona's five-square problem.

    PubMed

    Ollinger, Michael; Jones, Gary; Knoblich, Günther

    2014-01-01

    Insights are often productive outcomes of human thinking. We provide a cognitive model that explains insight problem solving by the interplay of problem space search and representational change, whereby the problem space is constrained or relaxed based on the problem representation. By introducing different experimental conditions that either constrained the initial search space or helped solvers to initiate a representational change, we investigated the interplay of problem space search and representational change in Katona's five-square problem. Testing 168 participants, we demonstrated that independent hints relating to the initial search space and to representational change had little effect on solution rates. However, providing both hints caused a significant increase in solution rates. Our results show the interplay between problem space search and representational change in insight problem solving: The initial problem space can be so large that people fail to encounter impasse, but even when representational change is achieved the resulting problem space can still provide a major obstacle to finding the solution.

  4. FLASH Technology: Full-Scale Hospital Waste Water Treatments Adopted in Aceh

    NASA Astrophysics Data System (ADS)

    Rame; Tridecima, Adeodata; Pranoto, Hadi; Moesliem; Miftahuddin

    2018-02-01

    Hospital waste water contains a complex mixture of hazardous chemicals and harmful microbes, which can pose a threat to the environment and public health. Some efforts have been carried out in Nangroe Aceh Darussalam (Aceh), Indonesia, with the objective of treating hospital waste water effluents on-site before discharge. FLASH technology uses physical and biological pre-treatment, followed by an advanced oxidation process based on catalytic ozonation, and then GAC and PAC filtration. Full-scale FLASH hospital waste water treatment plants in different districts of Aceh have been adopted and investigated. The collected data demonstrate good removal efficiency of macro-pollutants using FLASH technologies. In general, FLASH technologies could be considered a solution to the problem of managing hospital waste water.

  5. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1990-01-01

    In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except when a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a speedup of generally almost a factor of three over the full ENO method.

  6. The Osher scheme for non-equilibrium reacting flows

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1992-01-01

    An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zeldovich-Neumann-Doring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.

  7. An Analysis of Diagram Modification and Construction in Students' Solutions to Applied Calculus Problems

    ERIC Educational Resources Information Center

    Bremigan, Elizabeth George

    2005-01-01

    In the study reported here, I examined the diagrams that mathematically capable high school students produced in solving applied calculus problems in which a diagram was provided in the problem statement. Analyses of the diagrams contained in written solutions to selected free-response problems from the 1996 BC level Advanced Placement Calculus…

  8. Algebra Word Problem Solving Approaches in a Chemistry Context: Equation Worked Examples versus Text Editing

    ERIC Educational Resources Information Center

    Ngu, Bing Hiong; Yeung, Alexander Seeshing

    2013-01-01

    Text editing directs students' attention to the problem structure as they classify whether the texts of word problems contain sufficient, missing or irrelevant information for working out a solution. Equation worked examples emphasize the formation of a coherent problem structure to generate a solution. Its focus is on the construction of three…

  9. Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.

    PubMed

    Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian

    2017-01-01

    Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.

  10. Using Predictor-Corrector Methods in Numerical Solutions to Mathematical Problems of Motion

    ERIC Educational Resources Information Center

    Lewis, Jerome

    2005-01-01

    In this paper, the author looks at some classic problems in mathematics that involve motion in the plane. Many case problems like these are difficult and beyond the mathematical skills of most undergraduates, but computational approaches often require less insight into the subtleties of the problems and can be used to obtain reliable solutions.…
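
    A minimal predictor-corrector pair of the kind such computational approaches use, an explicit Euler predictor followed by a trapezoidal corrector (Heun's method), can be shown on the test equation y' = -y, y(0) = 1, whose exact solution is e^{-t} (the article's specific motion problems are not reproduced here):

```python
# Heun predictor-corrector: Euler predicts the next state, the trapezoidal
# rule corrects it using the predicted slope.
f = lambda t, y: -y                    # right-hand side of y' = -y
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):                   # integrate from t = 0 to t = 1
    y_pred = y + h * f(t, y)                            # predictor
    y = y + h / 2 * (f(t, y) + f(t + h, y_pred))        # corrector
    t += h

print(round(y, 4))   # → 0.3679, matching e^{-1} to four decimals
```

    For a problem of planar motion, y would simply be a vector of positions and velocities and f the corresponding system of first-order equations; the predictor-corrector structure is unchanged.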

  11. Element Verification and Comparison in Sierra/Solid Mechanics Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohashi, Yuki; Roth, William

    2016-05-01

    The goal of this project was to study the effects of element selection on the Sierra/SM solutions to five common solid mechanics problems. A total of nine element formulations were used for each problem. The models were run multiple times with varying spatial and temporal discretization in order to ensure convergence. The first four problems have been compared to analytical solutions, and all numerical results were found to be sufficiently accurate. The penetration problem was found to have a high mesh dependence in terms of element type, mesh discretization, and meshing scheme. Also, the time to solution is shown for each problem in order to facilitate element selection when computer resources are limited.

  12. Some new results on the central overlap problem in astrometry

    NASA Astrophysics Data System (ADS)

    Rapaport, M.

    1998-07-01

    The central overlap problem in astrometry has been revisited in recent years by Eichhorn (1988), who explicitly inverted the matrix of a constrained least squares problem. In this paper, the general explicit solution of the unconstrained central overlap problem is given. We also give the explicit solution for another set of constraints; this result confirms a conjecture expressed by Eichhorn (1988). We also consider the use of iterative methods to solve the central overlap problem. A surprising result is obtained when the classical Gauss-Seidel method is used: the iterations converge immediately to the general solution of the equations; we explain this property by writing the central overlap problem in a new set of variables.
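    For reference, the classical Gauss-Seidel sweep reads as follows (a generic textbook sketch, not the paper's astrometric formulation). Because each sweep uses freshly updated components immediately, the iteration finishes in a single sweep when the system is lower triangular, which gives the flavor of the "immediate convergence" reported above:

```python
import numpy as np

def gauss_seidel(A, b, iters=100, x0=None):
    """Classical Gauss-Seidel iteration for A x = b.

    Component i is updated using the already-updated components
    x[0..i-1] from the current sweep and the old components
    x[i+1..n-1] from the previous sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```

    On a lower-triangular system the very first sweep is exact, since every component is computed from components that are already final.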

  13. Solution of a cauchy problem for a diffusion equation in a Hilbert space by a Feynman formula

    NASA Astrophysics Data System (ADS)

    Remizov, I. D.

    2012-07-01

    The Cauchy problem for a class of diffusion equations in a Hilbert space is studied. It is proved that the Cauchy problem is well posed in the class of uniform limits of infinitely smooth bounded cylindrical functions on the Hilbert space, and the solution is presented in the form of a so-called Feynman formula, i.e., a limit of multiple integrals against a Gaussian measure as the multiplicity tends to infinity. It is also proved that the solution of the Cauchy problem depends continuously on the diffusion coefficient. A process reducing the approximate solution of an infinite-dimensional diffusion equation to finding a multiple integral of a real function of finitely many real variables is indicated.
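    The general shape of such a Feynman formula can be sketched in the finite-dimensional prototype, where the heat semigroup is an n-fold Gaussian convolution. The notation below is illustrative and not taken from the paper; in the Hilbert-space setting the Gaussian kernels are replaced by integrals against a Gaussian measure:

```latex
u(t,x) \;=\; \lim_{n\to\infty}
\int_{\mathbb{R}^d}\!\cdots\!\int_{\mathbb{R}^d}
u_0(x_n)\,
\prod_{k=1}^{n}
\frac{1}{(2\pi t/n)^{d/2}}
\exp\!\Bigl(-\frac{\lVert x_k - x_{k-1}\rVert^2}{2t/n}\Bigr)
\,dx_1\cdots dx_n,
\qquad x_0 = x.
```

    The right-hand side is the n-fold composition of short-time Gaussian smoothing operators, whose multiplicity tends to infinity exactly as described in the abstract.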

  14. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool that is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One such problem is how to quantify an algorithm's performance in finite time, that is, how to evaluate the quality of the solution an algorithm obtains for a practical problem; this gap greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the structure of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered by distance to divide the solution samples into several parts; the solution space and the "good enough" set can then be decomposed based on the clustering results. Finally, the evaluation result is obtained using standard statistical techniques. To validate the proposed method, several intelligent algorithms, including ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
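    The "ordinal performance" idea can be shown with a deliberately simplified sketch (the paper's clustering-based decomposition of the "good enough" set is not reproduced, and all names below are illustrative): rank a candidate TSP tour against uniformly sampled tours and report the fraction it beats or ties, a rank statistic rather than a raw objective value.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ordinal_performance(tour, dist, samples=2000, rng=None):
    """Fraction of uniformly sampled tours the candidate beats or ties.

    An ordinal (rank-based) score: 1.0 means the candidate was at
    least as short as every sampled tour; 0.5 means it is roughly
    median among random tours."""
    rng = rng or random.Random(0)
    n = len(dist)
    length = tour_length(tour, dist)
    beaten = 0
    for _ in range(samples):
        perm = list(range(n))
        rng.shuffle(perm)
        if tour_length(perm, dist) >= length:
            beaten += 1
    return beaten / samples
```

    The score depends only on the candidate's rank in the sample, so it is comparable across problems with very different objective scales, which is the appeal of ordinal evaluation.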

  15. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
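    A minimal sketch of the simultaneous-update idea on a toy problem (assumed for illustration; the wind tunnel and propeller applications in the paper are far richer): a scalar "flow" fixed-point iteration and a gradient step on the design parameter are interleaved in the same sweep, instead of nesting a fully converged flow solve inside every design update.

```python
def one_shot(eta=0.05, iters=500):
    """Simultaneous ('one-shot') optimization on a toy model.

    State equation (stand-in for one flow-solver sweep):
        u <- 0.5 * u + p        (fixed point: u = 2p)
    Objective: J(u) = (u - 1)^2, so the optimum is u = 1, p = 0.5.
    Both the state u and the design parameter p are updated in the
    same sweep; neither is converged before the other moves.
    """
    u, p = 0.0, 0.0
    for _ in range(iters):
        u = 0.5 * u + p                  # one state-iteration sweep
        # du/dp = 2 at the converged fixed point; use it as an
        # approximate sensitivity even before convergence.
        grad = 2.0 * (u - 1.0) * 2.0     # approximate dJ/dp by chain rule
        p -= eta * grad                  # design update in the same sweep
    return u, p
```

    For a small enough step size the coupled iteration is a contraction, so state and design converge together at the cost of roughly one flow solve, which is the saving the abstract describes.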

  16. Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem

    NASA Astrophysics Data System (ADS)

    Auteri, F.; Quartapelle, L.; Vigevano, L.

    2002-08-01

    This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.

  17. Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns

    NASA Technical Reports Server (NTRS)

    Shaeffer, John

    2008-01-01

    Matrix methods for solving integral equations via direct LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes up to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by exploiting the numerical low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand-side forcing function, and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. Compressed matrix storage and operation counts lead to orders-of-magnitude reductions in memory and run time.
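    The Adaptive Cross Approximation named in the abstract can be sketched as follows (a generic full-pivoting variant for illustration, not the report's implementation): the residual's largest entry selects a row/column "cross," and rank-one updates are accumulated until the pivot falls below tolerance, yielding a compressed factorization A ≈ U V.

```python
import numpy as np

def aca(A, tol=1e-8, max_rank=None):
    """Cross approximation of a matrix with full pivoting.

    Builds A ~= U @ V from individual rows and columns of the
    residual. In a production (partial-pivoting, matrix-free) ACA
    only O(rank * n) entries of A are ever evaluated, which is the
    source of the memory savings."""
    R = A.astype(float).copy()
    m, n = R.shape
    us, vs = [], []
    max_rank = max_rank or min(m, n)
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        pivot = R[i, j]
        if abs(pivot) < tol:
            break                       # residual is numerically negligible
        u = R[:, j] / pivot             # scaled column of the residual
        v = R[i, :].copy()              # row of the residual
        us.append(u.copy())
        vs.append(v)
        R -= np.outer(u, v)             # peel off the rank-one cross
    return np.column_stack(us), np.vstack(vs)
```

    On smooth (asymptotically rank-deficient) kernels the pivots decay rapidly, so the stored rank is a small fraction of the matrix dimension.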

  18. A temps nouveaux, solutions nouvelles: quelques propositions (New Times, New Solutions: Some Proposals).

    ERIC Educational Resources Information Center

    Capelle, Guy

    1983-01-01

    Serious problems in education in Latin America arising from political, economic, and social change periodically put in question the status, objectives, and manner of French second-language instruction. A number of solutions to general and specific pedagogical problems are proposed. (MSE)

  19. Helping the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection through the organization of a pilot health care provider research system.

    PubMed

    Tang, Liyang

    2013-04-04

    The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system; it can efficiently collect from various experts the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply the optimal implementation methodology to help the decision maker effectively promote various experts' views into various optimal solutions to this problem with the support of the pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted through the organization of a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers.
The data collected through a pilot health care provider research system in the 2009 to 2010 national expert survey could help the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection.

  20. Quantum solution to a class of two-party private summation problems

    NASA Astrophysics Data System (ADS)

    Shi, Run-Hua; Zhang, Shun

    2017-09-01

    In this paper, we define a class of special two-party private summation (S2PPS) problems and present a common quantum solution to them. Compared to related classical solutions, our solution offers higher security and lower communication complexity, and in particular it can ensure fairness between the two parties without the help of a third party. Furthermore, we investigate practical applications of the proposed S2PPS protocol in many privacy-preserving settings with big data sets, including private similarity decision, anonymous authentication, social networks, secure trade negotiation, and secure data mining.

Top