Sample records for global space-group optimization

  1. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global, but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.

  2. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N - 6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.

  3. Collaboration space division in collaborative product development based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    Advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of CPD, and it addresses the problem with a degree of foreseeability. The development group's activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools for optimizing the collaboration space. The MDL principle is used to set up an objective function, and the GA is used as the search methodology. The algorithm encodes spatial information as a binary chromosome. After repeated crossover, mutation, selection, and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This method can determine the members of each sub-space and the individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.
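The binary-chromosome GA sketched in this abstract can be illustrated in a few lines of Python. The correlation matrix, the two-way split, and the pairwise scoring function below are invented stand-ins for the paper's CM and MDL objective; they show only the encode/crossover/mutate/select loop, not the authors' actual formulation.

```python
import random

random.seed(0)

# Illustrative correlation matrix (CM) for 6 people: two strongly
# correlated blocks {0,1,2} and {3,4,5}. Not the paper's data.
CM = [[1.0 if i == j else (0.9 if (i < 3) == (j < 3) else 0.1)
      for j in range(6)] for i in range(6)]

def fitness(chrom):
    # Reward placing correlated people in the same sub-space; the 0.5
    # offset discourages lumping everyone together (a crude stand-in
    # for an MDL-style objective that balances model size against fit).
    return sum(CM[i][j] - 0.5
               for i in range(len(chrom))
               for j in range(i + 1, len(chrom))
               if chrom[i] == chrom[j])

def evolve(n=6, pop_size=30, gens=60):
    # Chromosome: one bit per person, naming that person's sub-space.
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                 # selection
        while len(survivors) < pop_size:
            a, b = random.sample(survivors[:pop_size // 2], 2)
            cut = random.randrange(1, n)                # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n)] ^= 1             # bit-flip mutation
            survivors.append(child)
        pop = survivors
    return max(pop, key=fitness)

best = evolve()
groups = sorted(sorted(i for i, g in enumerate(best) if g == s) for s in (0, 1))
print(groups)   # the two sub-spaces decoded from the chromosome
```

With this toy CM the decoded sub-spaces recover the two correlated blocks; a real CSD run would use the measured personnel correlations and an MDL objective.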

  4. Optimizing Societal Benefit using a Systems Engineering Approach for Implementation of the GEOSS Space Segment

    NASA Technical Reports Server (NTRS)

    Killough, Brian D., Jr.; Sandford, Stephen P.; Cecil, L DeWayne; Stover, Shelley; Keith, Kim

    2008-01-01

    The Group on Earth Observations (GEO) is driving a paradigm shift in the Earth Observation community, refocusing Earth observing systems on GEO Societal Benefit Areas (SBAs). Over the short history of space-based Earth observing systems, most decisions have been made based on improving our scientific understanding of the Earth, with the implicit assumption that this would serve society well in the long run. The space agencies responsible for developing the satellites used for global Earth observations are typically science driven. The innovation of GEO is the call for investments by space agencies to be driven by global societal needs. This paper presents the preliminary findings of an analysis focused on the observational requirements of the GEO Energy SBA. The analysis was performed by the Committee on Earth Observation Satellites (CEOS) Systems Engineering Office (SEO), which is responsible for facilitating the development of implementation plans that have the maximum potential for success while optimizing the benefit to society. The analysis utilizes a new taxonomy for organizing requirements, assesses the current gaps in space-based measurements and missions, assesses the impact of the current and planned space-based missions, and presents a set of recommendations.

  5. Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2014-10-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based systems tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of a global ECV (essential climate variables) climate monitoring architecture is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and, hopefully, elicit responses from the audience and climate science stakeholders.

  6. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  7. Method for using global optimization for the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.

  8. An efficient and practical approach to obtain a better optimum solution for structural optimization

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Yu; Huang, Jyun-Hao

    2013-08-01

    For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, they are used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP are sometimes even better than the solution found using a hybrid global search method with approximate structural analyses.
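The search-space-reduction idea above can be shown with a toy sketch: sample the original design space, keep the best fraction of the samples, shrink the search region to their bounding box, and start a local optimizer there. The multimodal objective, the 10% elite rule, and plain gradient descent are illustrative assumptions standing in for the paper's data-mining activities and SQP.

```python
import random

random.seed(1)

def f(x, y):
    # Toy multimodal objective: four global minima of value 0 at (+-1, +-1).
    return (x * x - 1.0) ** 2 + (y * y - 1.0) ** 2

# Step 1: sample the original design space [-5, 5]^2.
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(400)]
samples.sort(key=lambda p: f(*p))
elite = samples[:40]                      # best 10% ("mined" designs)

# Step 2: the reduced search region is the elite bounding box.
lo = (min(p[0] for p in elite), min(p[1] for p in elite))
hi = (max(p[0] for p in elite), max(p[1] for p in elite))

# Step 3: local gradient descent started from the best sampled point
# inside the reduced region (a stand-in for SQP).
x, y = elite[0]
for _ in range(500):
    gx = 4.0 * x * (x * x - 1.0)          # df/dx
    gy = 4.0 * y * (y * y - 1.0)          # df/dy
    x -= 0.02 * gx
    y -= 0.02 * gy

print(lo, hi)     # reduced region, well inside [-5, 5]^2
print(f(x, y))    # near-zero objective value at the local solution
```

The point of the sketch is the division of labor: cheap global sampling decides where to look, and the gradient-based method only has to be good locally.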

  9. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
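The conventional SA loop described above (random start, Metropolis acceptance, cooling temperature, shrinking proposal region, best-so-far tracking) can be written down compactly. The Himmelblau test function, the cooling rate, and the shrink rate below are illustrative choices, not taken from the NASA report.

```python
import math
import random

random.seed(2)

def f(x, y):
    # Himmelblau's function: four global minima, all with value 0.
    return (x * x + y - 11.0) ** 2 + (x + y * y - 7.0) ** 2

# Random starting configuration in the parameter space [-5, 5]^2.
x, y = random.uniform(-5, 5), random.uniform(-5, 5)
fx = f(x, y)
best = (fx, x, y)

T, radius = 100.0, 4.0
for _ in range(30000):
    # Propose a configuration from a region that shrinks over time.
    nx = x + random.uniform(-radius, radius)
    ny = y + random.uniform(-radius, radius)
    fn = f(nx, ny)
    # Accept better moves always; accept worse moves with a
    # temperature-dependent probability (Metropolis rule).
    if fn < fx or random.random() < math.exp(-(fn - fx) / T):
        x, y, fx = nx, ny, fn
        if fx < best[0]:
            best = (fx, x, y)
    T *= 0.9995          # cooling schedule
    radius *= 0.9997     # shrinking proposal region

print(best[0])   # best objective value found (near 0)
```

RBSA would run many such trajectories in parallel, branching new ones from promising configurations; the single-trajectory loop above is the building block it parallelizes.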

  10. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics with system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder-return missions.

  11. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  12. Strategies for global optimization in photonics design.

    PubMed

    Vukovic, Ana; Sewell, Phillip; Benson, Trevor M

    2010-10-01

    This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.

  13. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based systems tools provide valuable insight for global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tool(s) suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study. This reflects work from the SPIE RS Conferences of 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, a complete reversal: should agencies consider disaggregation as the answer? We discuss what some academic research suggests.
Second, we use the GCOS requirements for Earth climate observations via ECVs (essential climate variables), many of which are collected from space-based sensors, and accept their definitions of the global coverage intended to ensure that the needs of major global and international organizations (UNFCCC and IPCC) are met as a core objective. How do new optimization tools like rule-based engines (RBES) offer alternative methods of evaluating collaborative architectures and constellations? What would the trade space of optimized operational climate monitoring architectures for ECVs look like? Third, using the RBES tool kit (2014), we demonstrate a climate-centric rule-based decision engine that optimizes architectural trades of Earth observation satellite systems, allowing comparison(s) to existing architectures and yielding insights for global collaborative architectures. How difficult is it to pull together an optimized climate case study, utilizing, for example, 12 climate-based instruments on multiple existing platforms and a nominal handful of orbits, for the best cost and performance benefits against the collection requirements of a representative set of ECVs? How much effort and what resources should an organization expect to invest to realize these analysis and utility benefits?

  14. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence, the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate that the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition, we show several examples where localized feature selection produces better results than a global feature selection method.
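The core idea, that each region of the sample space gets its own feature subset, can be sketched with a deliberately simple scoring rule. The region definitions, the separation score, and the hand-made data below are invented for illustration; the paper's actual method is a linear programming formulation, not this heuristic.

```python
from statistics import mean, pstdev

# Hand-made data: in region A the classes separate along feature 0,
# in region B along feature 1 (feature 2 is uninformative in both).
regions = {
    "A": {0: [[0.0, 5.0, 1.0], [0.2, 5.1, 0.9]],     # class 0 samples
          1: [[3.0, 5.0, 1.1], [3.2, 4.9, 1.0]]},    # class 1 samples
    "B": {0: [[2.0, 0.0, 1.0], [2.1, 0.2, 1.0]],
          1: [[2.0, 3.0, 1.0], [1.9, 3.1, 1.1]]},
}

def local_features(region, top=1):
    # Rank features inside one region by a simple class-separation
    # score |mu0 - mu1| / (sd0 + sd1) and keep the best `top` of them.
    c0, c1 = region[0], region[1]
    scores = []
    for j in range(len(c0[0])):
        a = [row[j] for row in c0]
        b = [row[j] for row in c1]
        sep = abs(mean(a) - mean(b)) / (pstdev(a) + pstdev(b) + 1e-9)
        scores.append((sep, j))
    return [j for _, j in sorted(scores, reverse=True)[:top]]

selected = {name: local_features(reg) for name, reg in regions.items()}
print(selected)   # each region ends up with its own feature subset
```

A global selector would be forced to pick one subset for both regions; the per-region scoring recovers a different informative feature in each, which is the behavior LFS formalizes.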

  15. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  16. Establishing a Modern Ground Network for Space Geodesy Applications

    NASA Technical Reports Server (NTRS)

    Pearlman, M.; Pavlis, E.; Altamimi, Z.; Noll, C.

    2010-01-01

    Ground-based networks of co-located space-geodesy techniques (VLBI, SLR, GNSS, DORIS) are the basis for the development and maintenance of the International Terrestrial Reference Frame (ITRF), which is the basis for our metric measurements of global change. The Global Geodetic Observing System (GGOS) within the International Association of Geodesy has established a task to develop a strategy to design, integrate and maintain the fundamental geodetic network and supporting infrastructure in a sustainable way to satisfy the long-term requirements for the reference frame. The GGOS goal is an origin definition at 1 mm or better and a temporal stability on the order of 0.1 mm/yr, with similar numbers for the scale and orientation components. These goals are based on scientific requirements to address sea level rise with confidence. As a first step, simulations focused on establishing the optimal global SLR and VLBI network, since these two techniques alone are sufficient to define the reference frame. The GNSS constellations will then distribute the reference frame to users anywhere on the Earth. Using simulated data to be collected by the future networks, we investigated various designs and the resulting accuracy in the origin, scale and orientation of the resulting ITRF. We present here the results of extensive simulation studies aimed at designing optimal global geodetic networks to support GGOS science products. Current estimates are that the network will require 24-32 globally distributed co-location sites. Stations in the near-global network will require geologically stable sites with good weather, established infrastructure, and local support and personnel. GGOS will seek groups that are interested in participation, and intends to issue a Call for Participation for groups that would like to take part in the network implementation and operation. Some examples of integrated stations currently in operation or under development will be presented.
We will examine necessary conditions and challenges in designing a co-location station.

  17. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method of finding the conditional optimal spacing.

  18. Shape Optimization of Supersonic Turbines Using Response Surface and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Shyy, Wei; Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. A global optimization framework combining the radial basis neural network (RBNN) and the polynomial-based response surface method (RSM) is constructed for shape optimization of a supersonic turbine. Based on the optimized preliminary design, shape optimization is performed for the first vane and blade of a 2-stage supersonic turbine, involving O(10) design variables. The design of experiment approach is adopted to reduce the data size needed by the optimization task. It is demonstrated that a major merit of the global optimization approach is that it enables one to adaptively revise the design space to perform multiple optimization cycles. This benefit is realized when an optimal design approaches the boundary of a pre-defined design space. Furthermore, by inspecting the influence of each design variable, one can also gain insight into the existence of multiple design choices and select the optimum design based on other factors such as stress and materials considerations.

  19. Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.

    2016-01-01

    The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.

  20. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.

  21. A multilevel control system for the large space telescope. [numerical analysis/optimal control]

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.

    1975-01-01

    A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used on the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST control system. The methods are entirely computerized, so that they can accommodate higher-order LST models, with both conceptual and numerical advantages over standard straightforward design techniques.

  22. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
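For uniform discrete measures, the Monge formulation with quadratic cost reduces to a minimum-cost assignment between support points, which a tiny brute-force sketch can illustrate. The 1-D point sets below are invented; the paper's set-oriented computations use transfer-operator discretizations and convex solvers rather than enumeration, which is feasible only for tiny supports.

```python
from itertools import permutations

# Source and target support points (1-D, uniform weights); illustrative only.
src = [0.0, 1.0, 2.0]
dst = [0.5, 1.5, 2.5]

def transport_cost(perm):
    # Quadratic (Monge-Kantorovich) cost of the map src[i] -> dst[perm[i]].
    return sum((src[i] - dst[j]) ** 2 for i, j in enumerate(perm))

# Brute-force search over all assignments of source to target points.
best = min(permutations(range(len(dst))), key=transport_cost)
print(best, transport_cost(best))
```

On the line with quadratic cost, the optimal map is the monotone (order-preserving) matching, and the enumeration confirms it; in higher dimensions and for general measures this structure is exactly what the graph-based convex formulation recovers at scale.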

  3. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing a global optimization methodology based on polynomial and neural network techniques for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.

  4. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier in a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems limit classifier accuracy and are the reason that there is no globally optimal method for classification, nor any generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon algorithm complexity. PMID:24883382
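    The GMC model itself is not reproduced here, but the basic ensemble mechanism it builds on, combining classifiers so that their individual errors cancel, can be sketched with a hypothetical majority vote over invented rule-based classifiers:

```python
def majority_vote(classifiers, x):
    # Combine binary classifiers by simple majority vote.
    votes = sum(1 if c(x) else 0 for c in classifiers)
    return votes * 2 > len(classifiers)

# Three hypothetical rule-based classifiers for the target concept "x > 2".
c1 = lambda x: x > 2        # always correct on this toy data
c2 = lambda x: x > 1        # errs on x = 2
c3 = lambda x: x > 3        # errs on x = 3

data = [(1, False), (2, False), (3, True), (4, True), (5, True)]
ensemble_errors = sum(majority_vote([c1, c2, c3], x) != y for x, y in data)
assert ensemble_errors == 0  # the two imperfect rules err on different samples
```

    The vote helps only when member errors are not perfectly correlated, which is why ensemble methods pair it with diverse base learners.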

  5. Optimum Multi-Impulse Rendezvous Program

    NASA Technical Reports Server (NTRS)

    Glandorf, D. R.; Onley, A. G.; Rozendaal, H. L.

    1970-01-01

    The OMIR program determines optimal n-impulse rendezvous trajectories under the restrictions of two-body motion in free space. Lawden's primer vector theory is applied to determine the optimum number of midcourse impulse applications. Global optimality is not guaranteed.

  6. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient-descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable-model weaknesses pertaining to model initialization, pose estimation, and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search space, by using statistically based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optimum, our method compares favorably to prevalent optimization techniques, both convex/nonconvex gradient-based optimizers and globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  7. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE PAGES

    Graf, Peter A.; Billups, Stephen

    2017-07-24

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
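    The paper's algorithm subdivides equilateral triangles rather than rectangles; a simplified, best-first triangle-subdivision sketch over a ternary-composition simplex in barycentric coordinates (the objective function, iteration count, and tolerances here are invented, and the DIRECT-style global/local balancing is omitted):

```python
import heapq

def f(p):
    # Hypothetical property model over a ternary composition (x, y, z), x + y + z = 1.
    x, y, z = p
    return (x - 0.2) ** 2 + (y - 0.5) ** 2

def center(tri):
    return tuple(sum(v[i] for v in tri) / 3 for i in range(3))

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tri):
    # Split one triangle into four congruent children via edge midpoints.
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# Best-first refinement: repeatedly split the triangle whose center scores lowest.
root = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
best_val, best_pt = f(center(root)), center(root)
heap = [(best_val, root)]
for _ in range(300):
    _, tri = heapq.heappop(heap)
    for child in subdivide(tri):
        c = center(child)
        v = f(c)
        if v < best_val:
            best_val, best_pt = v, c
        heapq.heappush(heap, (v, child))

assert abs(best_pt[0] - 0.2) < 0.05 and abs(best_pt[1] - 0.5) < 0.05
```

    Purely greedy refinement like this can stall on multimodal objectives; DIRECT's "potentially optimal" selection rule exists precisely to keep some budget on large, unexplored regions.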

  9. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
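    A toy sketch of the guiding mechanism, crossover with the current global best plus dynamic mutation and greedy replacement (population size, rates, and the sphere objective are invented; the local-search mechanism is omitted):

```python
import random

random.seed(42)

def sphere(x):
    return sum(v * v for v in x)

DIM, POP, GENS = 5, 30, 200
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
initial_best = min(sphere(p) for p in pop)

for gen in range(GENS):
    best = min(pop, key=sphere)                    # current global best as guide
    p_mut = 0.5 * (1 - gen / GENS) + 0.05          # dynamic mutation probability
    sigma = 0.5 * (1 - gen / GENS) + 0.01          # shrinking mutation step
    for i, ind in enumerate(pop):
        # Cross every individual with the global best, not a random partner.
        child = [b if random.random() < 0.5 else v for v, b in zip(ind, best)]
        child = [v + random.gauss(0, sigma) if random.random() < p_mut else v
                 for v in child]
        if sphere(child) < sphere(ind):            # greedy replacement
            pop[i] = child

final_best = min(sphere(p) for p in pop)
assert final_best <= initial_best and final_best < 0.5
```

    Greedy replacement makes the population's best value non-increasing by construction, which is the "greedy strategy" in the title.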

  11. Petascale Diagnostic Assessment of the Global Portfolio Rainfall Space Missions' Ability to Support Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Chaney, N.; Herman, J. D.; Wood, E. F.; Ferringer, M. P.

    2015-12-01

    This research represents a multi-institutional collaboration between Cornell University, The Aerospace Corporation, and Princeton University that has completed a Petascale diagnostic assessment of the current 10 satellite missions providing rainfall observations. Our diagnostic assessment has required four core tasks: (1) formally linking high-resolution astrodynamics design and coordination of space assets with their global hydrological impacts within a Petascale "many-objective" global optimization framework, (2) developing a baseline diagnostic evaluation of a 1-degree resolution global implementation of the Variable Infiltration Capacity (VIC) model to establish the required satellite observation frequencies and coverage to maintain acceptable global flood forecasts, (3) evaluating the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission, and (4) conceptualizing the next-generation space-based platforms for water cycle observation. Our team exploited over 100 million hours of computing access on the 700,000+ core Blue Waters machine to radically advance our ability to discover and visualize key system tradeoffs and sensitivities. This project represents, to our knowledge, the first attempt to develop a 10,000-member Monte Carlo global hydrologic simulation at one-degree resolution that characterizes the uncertain effects of changing the available frequencies of satellite precipitation on drought and flood forecasts. The simulation-optimization components of the work have set a theoretical baseline for the best possible frequencies and coverages for global precipitation given unlimited investment, broad international coordination in reconfiguring existing assets, and new satellite constellation design objectives informed directly by key global hydrologic forecasting requirements. Our research represents a step toward realizing the integrated global water cycle observatory long sought by the World Climate Research Programme, which has to date eluded the world's space agencies.

  12. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds and communication-avoiding algorithms (that either meet the lower bound or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning, and genomics.

  13. Optimal Design of Grid-Stiffened Panels and Shells With Variable Curvature

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Jaunky, Navin

    2001-01-01

    A design strategy for optimal design of composite grid-stiffened structures with variable curvature subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. Stiffening configuration is herein defined as a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the stiffened panel. The design optimization process is adapted to identify the lightest-weight stiffening configuration and stiffener spacing for grid-stiffened composite panels given the overall panel dimensions, in-plane design loads, material properties, and boundary conditions of the grid-stiffened panel or shell.

  14. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James L.

    2010-01-01

    NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.

  15. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations; only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge-point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed, and theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, the convergence guarantees are probabilistic.
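    The voting step that the quantization analysis concerns can be sketched with a coarse discrete accumulator over the normal (theta, rho) parameterization (grid resolution and test points are invented; the paper's continuous voting kernel is not reproduced):

```python
import math
from collections import defaultdict

def hough_lines(points, theta_steps=180):
    # Vote in a (theta, rho) accumulator: each point (x, y) lies on every line
    # satisfying x*cos(theta) + y*sin(theta) = rho.
    acc = defaultdict(int)
    for x, y in points:
        for t in range(theta_steps):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    return acc

# Four collinear points on the line y = x, plus two outliers.
pts = [(1, 1), (2, 2), (3, 3), (4, 4), (3, 0), (0, 2)]
acc = hough_lines(pts)
(t, rho), votes = max(acc.items(), key=lambda kv: kv[1])
# For y = x the normal angle is 135 degrees and rho = 0; with integer rho bins
# the four votes smear over a few adjacent theta bins, which is exactly the
# quantization effect the paper analyzes.
assert votes == 4 and rho == 0 and 130 <= t <= 140
```

    Making the bins finer sharpens the peak but raises the risk of splitting votes across cells, which is the trade-off behind the convergence conditions above.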

  16. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search that satisfies the defining inequality. The exhaustive search can be organized so that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, although more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations. Since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is also required; such algorithms need to be developed and evaluated.

  17. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we address the issue of the over-segmented regions produced by watershed segmentation by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) soft-tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.

  18. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1992-01-01

    A general optimization-based method for the design of large space platforms through integration of the disciplines of structural dynamics and control is presented. The method uses the global sensitivity equations approach and is especially appropriate for preliminary design problems in which the structural and control analyses are tightly coupled. The method is capable of coordinating general purpose structural analysis, multivariable control, and optimization codes, and thus, can be adapted to a variety of controls-structures integrated design projects. The method is used to minimize the total weight of a space platform while maintaining a specified vibration decay rate after slewing maneuvers.

  19. Inverse design of bulk morphologies in block copolymers using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn

    Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
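    A minimal PSO sketch of the outer optimizer (inertia and acceleration coefficients are typical textbook values, and a closed-form sphere function stands in for the SCFT forward simulation):

```python
import random

random.seed(0)

def objective(x):
    # Stand-in for an expensive forward simulation (e.g., SCFT): sphere function.
    return sum(v * v for v in x)

DIM, SWARM, ITERS = 3, 20, 150
W, C1, C2 = 0.7, 1.5, 1.5          # inertia, cognitive, social coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])   # pull to own best
                         + C2 * r2 * (gbest[d] - pos[i][d]))     # pull to swarm best
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=objective)

assert objective(gbest) < 1e-2
```

    Each objective call here is cheap; in the inverse-design setting above it would instead launch a full SCFT solve, which is why the swarm size and iteration budget matter so much.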

  20. Dissipative structure and global existence in critical space for Timoshenko system of memory type

    NASA Astrophysics Data System (ADS)

    Mori, Naofumi

    2018-08-01

    In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. First, we consider the linearized system: applying the energy method in the Fourier space, we derive a pointwise estimate of the solution in the Fourier space, which gives the optimal decay estimate of the solution. We then characterize the dissipative structure of the system by using spectral analysis, which confirms that our pointwise estimate is optimal. Second, we consider the nonlinear system: we show that the global-in-time existence and uniqueness result can be proved under a minimal regularity assumption in the critical Sobolev space H2. In the proof, unlike recent works, we do not need any time-weighted norm; we use just an energy method, improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.

  1. A variant of special relativity and long-distance astronomy.

    PubMed

    Segal, I E

    1974-03-01

    The redshift, microwave background, and other observable astronomical features are deduced from two theoretical assumptions: (1) global space-time is a certain variant of Minkowski space, locally indistinguishable in causality and covariance features but globally admitting the full conformal group as symmetries although having a spherical space component; (2) the true energy operator corresponds to a certain generator of this group which is not globally scale-covariant, whereas laboratory frequency measurements are inevitably such and correspond to the conventional energy operator (ℏ/i)∂/∂t.

  2. Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.

    1996-01-01

    A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid-stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.

  3. Optimal Design of General Stiffened Composite Circular Cylinders for Global Buckling with Strength Constraints

    NASA Technical Reports Server (NTRS)

    Jaunky, N.; Ambur, D. R.; Knight, N. F., Jr.

    1998-01-01

    A design strategy for optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints was developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory was used for the global analysis. Local buckling of skin segments was assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments was also assessed. Constraints on the axial membrane strain in the skin and stiffener segments were imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study were the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence, and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process was adapted to identify the best-suited stiffening configurations and stiffener spacings for grid-stiffened composite cylinders, with the length and radius of the cylinder, the design in-plane loads, and material properties as inputs. The effect of imposing axial membrane strain constraints in the skin and stiffener segments during optimization is also studied for selected stiffening configurations.

  4. Optimal Design of General Stiffened Composite Circular Cylinders for Global Buckling with Strength Constraints

    NASA Technical Reports Server (NTRS)

    Jaunky, Navin; Knight, Norman F., Jr.; Ambur, Damodar R.

    1998-01-01

    A design strategy for optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints is developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory is used for the global analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments is also assessed. Constraints on the axial membrane strain in the skin and stiffener segments are imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study are the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence, and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process is adapted to identify the best-suited stiffening configurations and stiffener spacings for grid-stiffened composite cylinders, with the length and radius of the cylinder, the design in-plane loads, and material properties as inputs. The effect of imposing axial membrane strain constraints in the skin and stiffener segments during optimization is also studied for selected stiffening configurations.

  5. Global, Multi-Objective Trajectory Optimization With Parametric Spreading

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob A.; Phillips, Sean M.; Hughes, Kyle M.

    2017-01-01

    Mission design problems are often characterized by multiple, competing trajectory optimization objectives. Recent multi-objective trajectory optimization formulations enable generation of globally-optimal, Pareto solutions via a multi-objective genetic algorithm. A byproduct of these formulations is that clustering in design space can occur in evolving the population towards the Pareto front. This clustering can be a drawback, however, if parametric evaluations of design variables are desired. This effort addresses clustering by incorporating operators that encourage a uniform spread over specified design variables while maintaining Pareto front representation. The algorithm is demonstrated on a Neptune orbiter mission, and enhanced multidimensional visualization strategies are presented.

  6. GPU color space conversion

    NASA Astrophysics Data System (ADS)

    Chase, Patrick; Vondran, Gary

    2011-01-01

    Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare to the best known CPU implementations as well as to a well-known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored, including thread scheduling, local memory characteristics, the global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune them based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per-multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impacts of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators using SSE vector operations that are faster than any previously published solution.
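    The tetrahedral scheme itself is compact: the unit cell is split into six tetrahedra according to the ordering of the fractional coordinates, and the interpolant walks from corner (0,0,0) toward (1,1,1) along the sorted axes. A scalar CPU sketch under that formulation (the GPU memory-layout optimizations discussed above are not shown; the sample LUT is invented):

```python
def tetra_interp(lut, x, y, z):
    # Tetrahedral interpolation in a 3-D lookup table with unit grid spacing.
    # lut[(i, j, k)] holds the sample at integer lattice point (i, j, k).
    base = [int(x), int(y), int(z)]
    frac = [x - base[0], y - base[1], z - base[2]]
    # Sorting axes by fractional part (descending) selects one of six tetrahedra.
    order = sorted(range(3), key=lambda a: -frac[a])
    corner = base[:]
    val = lut[tuple(corner)]
    prev = val
    for a in order:
        corner[a] += 1                      # step toward the (1,1,1) corner
        nxt = lut[tuple(corner)]
        val += frac[a] * (nxt - prev)       # blend along the traversed edge
        prev = nxt
    return val

# LUT sampling the linear function f(x, y, z) = x + 2y + 3z on a 2x2x2 grid;
# tetrahedral interpolation reproduces any linear function exactly.
lut = {(i, j, k): i + 2 * j + 3 * k
       for i in (0, 1) for j in (0, 1) for k in (0, 1)}
assert abs(tetra_interp(lut, 0.3, 0.6, 0.2) - 2.1) < 1e-12
```

    Compared with trilinear interpolation, only four of the eight cube corners are fetched per sample, which is one reason the tetrahedral variant maps well to memory-bound hardware.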

  7. A global analysis of adaptive evolution of operons in cyanobacteria.

    PubMed

    Memon, Danish; Singh, Abhay K; Pakrasi, Himadri B; Wangikar, Pramod P

    2013-02-01

    Operons are an important feature of prokaryotic genomes. Evolution of operons is hypothesized to be adaptive and has contributed significantly towards coordinated optimization of functions. Two conflicting theories, based on (i) in situ formation to achieve co-regulation and (ii) horizontal gene transfer of functionally linked gene clusters, are generally considered to explain why and how operons have evolved. Furthermore, effects of operon evolution on genomic traits such as intergenic spacing, operon size and co-regulation are relatively less explored. Based on the conservation level in a set of diverse prokaryotes, we categorize the operonic gene pair associations and in turn the operons as ancient and recently formed. This allowed us to perform a detailed analysis of operonic structure in cyanobacteria, a morphologically and physiologically diverse group of photoautotrophs. Clustering based on operon conservation showed significant similarity with the 16S rRNA-based phylogeny, which groups the cyanobacterial strains into three clades. Clade C, dominated by strains that are believed to have undergone genome reduction, shows a larger fraction of operonic genes that are tightly packed in larger sized operons. Ancient operons are in general larger, more tightly packed, better optimized for co-regulation and part of key cellular processes. A sub-clade within Clade B, which includes Synechocystis sp. PCC 6803, shows a reverse trend in intergenic spacing. Our results suggest that while in situ formation and vertical descent may be a dominant mechanism of operon evolution in cyanobacteria, optimization of intergenic spacing and co-regulation are part of an ongoing process in the life-cycle of operons.

  8. The Global Geodetic Observing System: Space Geodesy Networks for the Future

    NASA Technical Reports Server (NTRS)

    Pearlman, Michael; Pavlis, Erricos; Ma, Chopo; Altamini, Zuheir; Noll, Carey; Stowers, David

    2011-01-01

    Ground-based networks of co-located space geodetic techniques (VLBI, SLR, GNSS, and DORIS) are the basis for the development and maintenance of the International Terrestrial Reference Frame (ITRF), which is our metric of reference for measurements of global change. The Global Geodetic Observing System (GGOS) of the International Association of Geodesy (IAG) has established a task to develop a strategy to design, integrate and maintain the fundamental geodetic network and supporting infrastructure in a sustainable way to satisfy the long-term requirements for the reference frame. The GGOS goal is an origin definition at 1 mm or better and a temporal stability on the order of 0.1 mm/yr, with similar numbers for the scale and orientation components. These goals are based on scientific requirements to address sea level rise with confidence, but other applications are not far behind. Recent studies, including one by the US National Research Council, have strongly stated the need and the urgency for the fundamental space geodesy network. Simulations are underway to examine accuracies for the origin, scale and orientation of the resulting ITRF based on various network designs and system performance, to determine the optimal global network to achieve this goal. To date these simulations indicate that 24-32 co-located stations are adequate to define the reference frame, and that a denser GNSS and DORIS network will be required to distribute the reference frame to users anywhere on Earth. Stations in the new global network will require geologically stable sites with good weather, established infrastructure, and local support and personnel. GGOS will seek groups that are interested in participation and intends to issue a Call for Participation for groups that would like to contribute to the network implementation and operation. Some examples of integrated stations currently in operation or under development will be presented. We will examine necessary conditions and challenges in designing a co-location station.

  9. APPLICATION OF A BIP CONSTRAINED OPTIMIZATION MODEL COMBINED WITH NASA's ATLAS MODEL TO OPTIMIZE THE SOCIETAL BENEFITS OF THE USA's INTERNATIONAL SPACE EXPLORATION AND UTILIZATION INITIATIVE OF 1/14/04

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel

    2005-01-01

    The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled "Constrained Optimization Models for Optimizing Multi-Year Space Programs". This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is in Ref. [2], this Paper presents the Application saga of that effort.

  10. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems arise in several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm that guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.
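
    The QUBO form can be made concrete with a toy solver. Here exhaustive enumeration stands in for the adiabatic quantum optimizer, so it is feasible only for very small instances; the matrix below is illustrative and not derived from any PDE.

```python
import itertools

def solve_qubo(Q):
    """Minimize sum_ij Q[i][j] * x_i * x_j over binary vectors x.

    Brute-force enumeration is a classical stand-in for the AQO and
    scales as 2^n, so it is usable only for tiny problems.
    """
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in itertools.product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

    For example, Q = [[-1, 2], [0, -1]] rewards turning either bit on but penalizes turning on both, so the minimum energy is -1 at a single-bit solution.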

  11. A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.

    PubMed

    Zhang, Geng; Li, Yangmin

    2016-06-01

    Avoiding entrapment in local optima is a major challenge, especially for high-dimensional nonseparable problems in which the interdependencies among vector elements are unknown. In order to improve optimization performance, a novel memetic algorithm (MA) called the cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses 1-D swarms to search each dimension separately and thus converges fast. Besides, it can obtain global optimum elements according to our experimental results and analyses. The MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate the good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.
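
    The cooperative, dimension-by-dimension idea can be caricatured without the full PSO/harmony-search machinery: each coordinate is improved separately while the others are held at the current best (the "context vector"). This is a deliberately stripped-down sketch, not the CPSO-MHS algorithm itself, and the function name and parameters are illustrative.

```python
import random

def cooperative_search(f, dim, iters=200, step=0.5, seed=0):
    """Improve one coordinate at a time against the context vector,
    mimicking the 1-D sub-swarms of CPSO in a minimal form."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iters):
        for d in range(dim):
            cand = list(best)
            cand[d] += rng.gauss(0, step)  # 1-D move in dimension d
            if f(cand) < f(best):          # keep only improvements
                best = cand
    return best
```

    On separable functions this decomposition converges quickly; as the abstract notes, unknown interdependencies among vector elements are exactly what make such per-dimension searches struggle, motivating the added global recombination step.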

  12. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimizing the sizes of dynamic var sources at candidate locations using a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram of cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and the NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the searching space and converge to the global optimal solution.

  13. Global invariants of paths and curves for the group of all linear similarities in the two-dimensional Euclidean space

    NASA Astrophysics Data System (ADS)

    Khadjiev, Djavvat; Ören, İdris; Pekşen, Ömer

    Let E2 be the 2-dimensional Euclidean space, LSim(2) be the group of all linear similarities of E2 and LSim+(2) be the group of all orientation-preserving linear similarities of E2. The present paper is devoted to solutions of problems of global G-equivalence of paths and curves in E2 for the groups G = LSim(2),LSim+(2). Complete systems of global G-invariants of a path and a curve in E2 are obtained. Existence and uniqueness theorems are given. Evident forms of a path and a curve with the given global invariants are obtained.

  14. Final Report from The University of Texas at Austin for DEGAS: Dynamic Global Address Space programming environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erez, Mattan; Yelick, Katherine; Sarkar, Vivek

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: a rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance Portability: just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy Efficiency: communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: runtime and language interoperability with MPI and OpenMP to encourage broad adoption.

  15. Optimal Foraging in Semantic Memory

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Jones, Michael N.; Todd, Peter M.

    2012-01-01

    Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared…

  16. Optimized decoy state QKD for underwater free space communication

    NASA Astrophysics Data System (ADS)

    Lopes, Minal; Sarwade, Nisha

    Quantum cryptography (QC) is envisioned as a solution for global key distribution through fiber optic, free space and underwater optical communication due to its unconditional security. In view of this, this paper investigates an underwater free space quantum key distribution (QKD) model for enhanced transmission distance, secret key rates and security. It has been reported that secure underwater free space QKD is feasible in the clearest ocean water with sifted key rates up to 207 kbps. This paper extends that work by testing the performance of an optimized decoy state QKD protocol with an underwater free space communication model. The attenuation of photons, the quantum bit error rate and the sifted key generation rate of underwater quantum communication are obtained with vector radiative transfer theory and the Monte Carlo method. It is observed from the simulations that optimized decoy state QKD evidently enhances the underwater secret key transmission distance as well as the secret key rates.

  17. Current trends in satellite based emergency mapping - the need for harmonisation

    NASA Astrophysics Data System (ADS)

    Voigt, Stefan

    2013-04-01

    During the past years, the availability and use of satellite image data to support disaster management and humanitarian relief organisations have increased substantially. Automation and data processing techniques are improving greatly, as is the capacity to access and process satellite imagery globally. More and more global activities, via the internet and through global organisations like the United Nations or the International Charter Space and Major Disasters, engage in the topic, while at the same time more and more national or local centres take up rapid mapping operations and activities. In order to make even more effective use of this very positive increase in capacity, rapid mapping results in general need to be better harmonized, if not standardized: for the sake of operational provision of analysis results, for fast validation of satellite-derived damage assessments, for better cooperation in the joint inter-agency generation of rapid mapping products, and for general scientific use. In this presentation, experiences from several years of rapid mapping gained by the DLR Center for Satellite Based Crisis Information (ZKI) within the context of national activities, the International Charter Space and Major Disasters, GMES/Copernicus etc. are reported. Furthermore, an overview is given of how automation, quality assurance and optimization can be achieved through standard operating procedures within a rapid mapping workflow. Building on this long-term rapid mapping experience, and on the DLR initiative to set in place an "International Working Group on Satellite Based Emergency Mapping", current trends in rapid mapping are discussed and thoughts are presented on how the sharing of rapid mapping information can be optimized by harmonizing analysis results and data structures. Such harmonization of analysis procedures, nomenclatures and representations of data, as well as metadata, is the basis for better cooperation within the global rapid mapping community across local/national, regional/supranational and global scales.

  18. A trust region-based approach to optimize triple response systems

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen

    2014-05-01

    This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.

  19. The CEOS Atmospheric Composition Constellation: Enhancing the Value of Space-Based Observations

    NASA Technical Reports Server (NTRS)

    Eckman, Richard; Zehner, Claus; Al-Saadi, Jay

    2015-01-01

    The Committee on Earth Observation Satellites (CEOS) coordinates civil space-borne observations of the Earth. Participating agencies strive to enhance international coordination and data exchange and to optimize societal benefit. In recent years, CEOS has collaborated closely with the Group on Earth Observations (GEO) in implementing the Global Earth Observing System of Systems (GEOSS) space-based objectives. The goal of the CEOS Atmospheric Composition Constellation (ACC) is to collect and deliver data to improve monitoring, assessment and predictive capabilities for changes in the ozone layer, air quality and climate forcing associated with changes in the environment through coordination of existing and future international space assets. A project to coordinate and enhance the science value of a future constellation of geostationary sensors measuring parameters relevant to air quality supports the forthcoming European Sentinel-4, Korean GEMS, and US TEMPO missions. Recommendations have been developed for harmonization to mutually improve data quality and facilitate widespread use of the data products.

  20. Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints

    NASA Technical Reports Server (NTRS)

    Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren

    2015-01-01

    Interplanetary missions are often subject to difficult constraints, such as the solar phase angle upon arrival at the destination, the velocity at arrival, and altitudes for flybys. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions that do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work, two stochastic global search methods are developed that are well suited to the constrained global interplanetary trajectory optimization problem.

  1. The Powell Volcano Remote Sensing Working Group Overview

    NASA Astrophysics Data System (ADS)

    Reath, K.; Pritchard, M. E.; Poland, M. P.; Wessels, R. L.; Biggs, J.; Carn, S. A.; Griswold, J. P.; Ogburn, S. E.; Wright, R.; Lundgren, P.; Andrews, B. J.; Wauthier, C.; Lopez, T.; Vaughan, R. G.; Rumpf, M. E.; Webley, P. W.; Loughlin, S.; Meyer, F. J.; Pavolonis, M. J.

    2017-12-01

    Hazards from volcanic eruptions pose risks to the lives and livelihood of local populations, with potential global impacts to businesses, agriculture, and air travel. The 2015 Global Assessment of Risk report notes that 800 million people are estimated to live within 100 km of 1400 subaerial volcanoes identified as having eruption potential. However, only 55% of these volcanoes have any type of ground-based monitoring. The only methods currently available to monitor these unmonitored volcanoes are space-based systems that provide a global view. However, with the explosion of data techniques and sensors currently available, taking full advantage of these resources can be challenging. The USGS Powell Center Volcano Remote Sensing Working Group is working with many partners to optimize satellite resources for global detection of volcanic unrest and assessment of potential eruption hazards. In this presentation we will describe our efforts to: 1) work with space agencies to target acquisitions from the international constellation of satellites to collect the right types of data at volcanoes with forecasting potential; 2) collaborate with the scientific community to develop databases of remotely acquired observations of volcanic thermal, degassing, and deformation signals to facilitate change detection and assess how these changes are (or are not) related to eruption; and 3) improve usage of satellite observations by end users at volcano observatories that report to their respective governments. Currently, the group has developed time series plots for 48 Latin American volcanoes that incorporate variations in thermal, degassing, and deformation readings over time. These are compared against eruption timing and ground-based data provided by the Smithsonian Institution Global Volcanism Program. 
Distinct patterns in unrest and eruption are observed at different volcanoes, illustrating the difficulty in developing generalizations, but highlighting the power of remote sensing to better understand each volcano's behavior. To share these results with end users, the group is developing a communication tool that would allow researchers to share information relating to specific volcanoes or regions, although it is currently under development as we work to determine the clearest lines of communication.

  2. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    NASA Astrophysics Data System (ADS)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques, Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the location of microseismic events from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
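
    For contrast with the stochastic searches, the baseline grid search (GS) can be sketched in a few lines: scan candidate source positions and keep the one whose predicted travel times best fit the picked arrivals. The homogeneous-velocity 2D setting and the function name are simplifying assumptions, not the paper's actual velocity model.

```python
def locate_event_grid(receivers, t_obs, v, grid):
    """Grid search event location: return the candidate source in
    `grid` minimizing the least-squares misfit between observed
    arrival times and straight-ray travel times at velocity v."""
    def misfit(src):
        err = 0.0
        for (rx, ry), t in zip(receivers, t_obs):
            d = ((src[0] - rx) ** 2 + (src[1] - ry) ** 2) ** 0.5
            err += (d / v - t) ** 2
        return err
    return min(grid, key=misfit)
```

    The cost of GS grows with the product of grid dimensions, which is why restricting the search region with backazimuth information, or replacing GS with VFSA/PSO, pays off.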

  3. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. 
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
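
    A minimal version of the maximin fitness used to pick the leading particle can be written directly from its definition: for minimization objectives, a particle's fitness is the largest, over all other particles, of the smallest per-objective advantage. A negative value indicates a non-dominated particle, and the smallest value identifies the leader. The function name is illustrative.

```python
def maximin_fitness(objectives):
    """Maximin fitness for a population of objective vectors
    (minimization). Particles with fitness < 0 are non-dominated;
    the particle with the smallest fitness leads the swarm, with no
    ranking or niching required."""
    fits = []
    for i, fi in enumerate(objectives):
        worst = float("-inf")
        for j, fj in enumerate(objectives):
            if i == j:
                continue
            # Smallest margin of fi over fj across objectives.
            worst = max(worst, min(a - b for a, b in zip(fi, fj)))
        fits.append(worst)
    return fits
```

    For instance, with two conflicting objectives, the vectors (1, 3), (3, 1) and (2, 2) are mutually non-dominated (negative fitness) while (4, 4) is dominated (positive fitness).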

  4. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James

    2014-01-01

    NASA's space data-communications infrastructure (the Space Network and the Ground Network) provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure (the relay satellites and the ground stations) can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. 
The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.
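
    The evolutionary-algorithm approach can be illustrated with a toy bitstring scheduler: fitness rewards granted communication events and penalizes conflicting pairs (a stand-in for RFI constraints). Everything here, including the encoding, weights and penalty, is illustrative and not the disclosed NASA method.

```python
import random

def evolve_schedule(weights, conflicts, pop_size=40, gens=60, seed=1):
    """Toy genetic algorithm: a schedule is a bitstring over candidate
    events; fitness = granted weight minus a penalty for each
    conflicting pair scheduled together."""
    rng = random.Random(seed)
    n = len(weights)

    def fitness(s):
        gain = sum(w for bit, w in zip(s, weights) if bit)
        penalty = sum(10 for i, j in conflicts if s[i] and s[j])
        return gain - penalty

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2]               # truncation selection
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)       # parents from the elite
            cut = rng.randrange(n)
            child = a[:cut] + b[cut:]            # one-point crossover
            child[rng.randrange(n)] ^= 1         # single-bit mutation
            pop.append(child)
    return max(pop, key=fitness)
```

    With three equally weighted events of which the first two conflict, the search settles on schedules that grant the third event and at most one of the conflicting pair.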

  5. An Automatic Medium to High Fidelity Low-Thrust Global Trajectory Toolchain; EMTG-GMAT

    NASA Technical Reports Server (NTRS)

    Beeson, Ryne T.; Englander, Jacob A.; Hughes, Steven P.; Schadegg, Maximillian

    2015-01-01

    Solving the global optimization, low-thrust, multiple-flyby interplanetary trajectory problem with high-fidelity dynamical models requires an unreasonable amount of computational resources. A better approach, and one that is demonstrated in this paper, is a multi-step process whereby the aforementioned problem is solved at a lower fidelity and this solution is used as an initial guess for a higher-fidelity solver. The framework presented in this work uses two tools developed by NASA Goddard Space Flight Center: the Evolutionary Mission Trajectory Generator (EMTG) and the General Mission Analysis Tool (GMAT). EMTG is a medium to medium-high fidelity low-thrust interplanetary global optimization solver, which now has the capability to automatically generate GMAT script files for seeding a high-fidelity solution using GMAT's local optimization capabilities. A discussion of the dynamical models as well as thruster and power modeling for both EMTG and GMAT is given in this paper. Current capabilities are demonstrated with examples that highlight the toolchain's ability to efficiently solve the difficult low-thrust global optimization problem with little human intervention.

  6. Three-dimensional desirability spaces for quality-by-design-based HPLC development.

    PubMed

    Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M

    2015-04-01

    In this study, three-dimensional desirability spaces were introduced as a graphical representation method for design spaces. This is illustrated by applying quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion, and subsequently the global desirability function, were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of the desirability index in the space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided selection of the design space within the potential desirability subspaces. Three-dimensional desirability spaces offered better visualization and potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to critical qualities, as compared with the corresponding resolution spaces.
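
    The desirability calculus behind such plots is the standard Derringer-Suich construction: each criterion is mapped to [0, 1] and the global desirability is the geometric mean, so a single unacceptable criterion zeroes the overall index. A one-sided "larger is better" transform is sketched below; the function names and bounds are illustrative, not the paper's.

```python
def desirability_larger_is_better(y, lo, hi, s=1.0):
    """Derringer-Suich one-sided desirability: 0 at or below lo,
    1 at or above hi, a power ramp in between (s shapes the ramp)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def global_desirability(ds):
    """Geometric mean of individual desirabilities; any d = 0
    drives the global desirability to 0."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

    Plotting the region where the global desirability exceeds a chosen threshold over the three method parameters yields exactly the kind of 3D desirability space the abstract describes.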

  7. Coordinated and uncoordinated optimization of networks

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-06-01

    In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w)∝w-α are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process, the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.
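
    The balance being optimized can be written as a single objective: total Euclidean wire length plus λ times the average shortest path length (here measured in hops, via Floyd-Warshall). The equal weighting and hop metric are illustrative simplifications of the paper's setup.

```python
import itertools

def network_cost(points, edges, lam):
    """Cost functional for a spatial network: total wire length
    (Euclidean edge lengths) plus lam times the average shortest
    path length in hops. lam sets the cost-communication trade-off."""
    n = len(points)
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    wire = 0.0
    for i, j in edges:
        (xi, yi), (xj, yj) = points[i], points[j]
        wire += ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
        dist[i][j] = dist[j][i] = 1
    # Floyd-Warshall all-pairs shortest paths (k is the outer index).
    for k, i, j in itertools.product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
    avg = sum(dist[i][j] for i in range(n)
              for j in range(n) if i != j) / (n * (n - 1))
    return wire + lam * avg
```

    A global optimizer would minimize this quantity over edge sets; the competitive variant instead lets each node greedily rewire to lower its own share of the cost.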

  8. Minimum deltaV Burn Planning for the International Space Station Using a Hybrid Optimization Technique, Level 1

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2015-01-01

    The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO personnel routinely generate look-ahead trajectories for the ISS that incorporate the translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, and each burn's Time of Ignition (TIG) and magnitude (i.e. deltaV), such that these constraints are met. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems: one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints.
The total deltaV from Level 1 is then used as a fitness function by the genetic algorithm in Level 2 to select the number of burns and their TIGs for the next generation. In this manner, the two levels solve their respective sub-problems separately but collaboratively until a burn solution is found that globally minimizes the deltaV across the entire trajectory. Feasible solutions can also be found by simply using the SQP algorithm in Level 1 with a zero cost function. This paper discusses the formulation of the Level 1 sub-problem and the development of a prototype software tool to solve it. The Level 2 sub-problem will be discussed in a future work. Following the Level 1 formulation and solution, several look-ahead trajectory examples for the ISS are explored. In each case, the burn targeting results using the current process are compared against a feasible solution found using Level 1 in the proposed technique. Level 1 is then used to find a minimum deltaV solution given the fixed number of burns and burn TIGs. The optimal solution is compared with the previously found feasible solution to determine the deltaV (and therefore propellant) savings. The proposed technique seeks to both improve the current process for targeting ISS burns, and to add the capability to optimize ISS burns in a novel fashion. The optimal solutions found using this technique can potentially save hundreds of kilograms of propellant over the course of the ISS mission compared to feasible solutions alone. While the software tool being developed to implement this technique is specific to ISS, the concept is extensible to other long-duration, central-body orbiting missions that must perform orbit maintenance burns to meet operational trajectory constraints.
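    The two-level structure described above can be sketched in miniature. Everything below is a hypothetical toy, not the TOPO model: a single linear "trajectory" constraint stands in for the real constraint set, and the inner SQP solve is replaced by its closed-form solution so the sketch stays self-contained. It shows how an outer genetic algorithm over burn TIGs can use the inner level's minimum total deltaV as its fitness function.

    ```python
    import math
    import random

    REQUIRED_EFFECT = 30.0  # hypothetical total orbit-raising effect needed


    def burn_efficiency(tig):
        """Hypothetical efficiency of a burn as a function of its TIG."""
        return 1.0 + 0.5 * math.sin(tig)


    def inner_level(tigs):
        """Minimum total deltaV meeting the constraint for fixed TIGs.

        With one linear constraint sum(eff_i * dv_i) >= R and dv_i >= 0,
        the optimum puts all deltaV on the most efficient burn.  (A real
        implementation would call an SQP solver with many constraints.)
        """
        best_eff = max(burn_efficiency(t) for t in tigs)
        return REQUIRED_EFFECT / best_eff


    def outer_level(n_burns=3, pop_size=20, generations=40, seed=1):
        """Genetic algorithm over burn TIGs; fitness is the inner deltaV."""
        rng = random.Random(seed)
        pop = [[rng.uniform(0.0, 2 * math.pi) for _ in range(n_burns)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=inner_level)
            survivors = pop[:pop_size // 2]       # keep the low-deltaV half
            children = []
            for _ in range(pop_size - len(survivors)):
                a, b = rng.sample(survivors, 2)
                # Per-gene crossover plus a small Gaussian mutation.
                child = [rng.choice(pair) + rng.gauss(0.0, 0.1)
                         for pair in zip(a, b)]
                children.append(child)
            pop = survivors + children
        best = min(pop, key=inner_level)
        return best, inner_level(best)


    tigs, total_dv = outer_level()
    # Optimal total deltaV approaches R / max-efficiency = 30 / 1.5 = 20.
    ```

    The two levels solve their sub-problems separately but collaboratively, exactly as the abstract describes: the outer search never sees the constraints, only the inner level's optimal cost.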

  9. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.

  10. Use of an Atrial Lead with Very Short Tip-To-Ring Spacing Avoids Oversensing of Far-Field R-Wave

    PubMed Central

    Kolb, Christof; Nölker, Georg; Lennerz, Carsten; Jetter, Hansmartin; Semmler, Verena; Pürner, Klaus; Gutleben, Klaus-Jürgen; Reents, Tilko; Lang, Klaus; Lotze, Ulrich

    2012-01-01

    Objective The AVOID-FFS (Avoidance of Far-Field R-wave Sensing) study aimed to investigate whether an atrial lead with a very short tip-to-ring spacing without optimization of pacemaker settings shows equally low incidence of far-field R-wave sensing (FFS) when compared to a conventional atrial lead in combination with optimization of the programming. Methods Patients receiving a dual chamber pacemaker were randomly assigned to receive an atrial lead with a tip-to-ring spacing of 1.1 mm or a lead with a conventional tip-to-ring spacing of 10 mm. Postventricular atrial blanking (PVAB) was programmed to the shortest possible value of 60 ms in the study group, and to an individually determined optimized value in the control group. Atrial sensing threshold was programmed to 0.3 mV in both groups. False positive mode switch caused by FFS was evaluated at one and three months post implantation. Results A total of 204 patients (121 male; age 73±10 years) were included in the study. False positive mode switch caused by FFS was detected in one (1%) patient of the study group and two (2%) patients of the control group (p = 0.62). Conclusion The use of an atrial electrode with a very short tip-to-ring spacing avoids inappropriate mode switch caused by FFS without the need for individual PVAB optimization. Trial Registration ClinicalTrials.gov NCT00512915 PMID:22745661

  11. Use of an atrial lead with very short tip-to-ring spacing avoids oversensing of far-field R-wave.

    PubMed

    Kolb, Christof; Nölker, Georg; Lennerz, Carsten; Jetter, Hansmartin; Semmler, Verena; Pürner, Klaus; Gutleben, Klaus-Jürgen; Reents, Tilko; Lang, Klaus; Lotze, Ulrich

    2012-01-01

    The AVOID-FFS (Avoidance of Far-Field R-wave Sensing) study aimed to investigate whether an atrial lead with a very short tip-to-ring spacing without optimization of pacemaker settings shows equally low incidence of far-field R-wave sensing (FFS) when compared to a conventional atrial lead in combination with optimization of the programming. Patients receiving a dual chamber pacemaker were randomly assigned to receive an atrial lead with a tip-to-ring spacing of 1.1 mm or a lead with a conventional tip-to-ring spacing of 10 mm. Postventricular atrial blanking (PVAB) was programmed to the shortest possible value of 60 ms in the study group, and to an individually determined optimized value in the control group. Atrial sensing threshold was programmed to 0.3 mV in both groups. False positive mode switch caused by FFS was evaluated at one and three months post implantation. A total of 204 patients (121 male; age 73±10 years) were included in the study. False positive mode switch caused by FFS was detected in one (1%) patient of the study group and two (2%) patients of the control group (p = 0.62). The use of an atrial electrode with a very short tip-to-ring spacing avoids inappropriate mode switch caused by FFS without the need for individual PVAB optimization. ClinicalTrials.gov NCT00512915.

  12. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    PubMed

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization rests on its ability to search for and find the global optimum. A good search, however, requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup: the algorithm mimics the seal pup's movement and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup's strategy consists of searching for and selecting the best lair by performing a random walk. Because seals are sensitive to external noise emitted by predators, the random walk takes two different states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS converges to the global optimum faster than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search, and that it improves the balance between exploration (extensive) and exploitation (intensive) of the search space. RSS thus efficiently mimics seal pup behavior to find the best lair, providing a new algorithm for global optimization problems.
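    A minimal sketch of the two-state walk described above, applied to a 1-D test function. The noise probability, step sizes, and greedy lair acceptance are illustrative assumptions, not the published RSS update rules.

    ```python
    import random


    def sphere(x):
        return x * x


    def ringed_seal_search(iters=2000, seed=7):
        rng = random.Random(seed)
        x = rng.uniform(-10.0, 10.0)            # initial "lair"
        best, best_f = x, sphere(x)
        for _ in range(iters):
            # Predator noise randomly triggers the urgent (extensive) state.
            urgent = rng.random() < 0.2
            if urgent:
                # Heavy-tailed Levy-like step for extensive search.
                step = rng.choice([-1, 1]) * (rng.paretovariate(1.5) - 1.0)
            else:
                # Short Brownian step for intensive search near the lair.
                step = rng.gauss(0.0, 0.1)
            cand = best + step
            if sphere(cand) < best_f:           # move only to a better lair
                best, best_f = cand, sphere(cand)
        return best, best_f


    best, best_f = ringed_seal_search()
    ```

    The Brownian state exploits the neighborhood of the current lair while the occasional Lévy-style jump explores sparse, distant regions, which is the exploration/exploitation balance the abstract emphasizes.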

  13. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we use a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant-modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy; for constant-modulus signals, comparable performance is obtained.

  14. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
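    The "expectancy of improvement" criterion at the heart of EGO has a standard closed form. The sketch below implements that expected-improvement formula for one candidate point, assuming the kriging model supplies a predictive mean and standard deviation; building the kriging model itself is out of scope here.

    ```python
    import math


    def normal_pdf(z):
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)


    def normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


    def expected_improvement(mu, sigma, f_best):
        """EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma.

        mu, sigma: kriging predictive mean and standard deviation at the
        candidate point; f_best: best objective value sampled so far.
        """
        if sigma <= 0.0:
            return max(f_best - mu, 0.0)        # no predictive uncertainty
        z = (f_best - mu) / sigma
        return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)
    ```

    EI rewards both a low predicted mean (exploitation) and high predictive uncertainty (exploration); infill samples are placed where it is maximized, which is the difficult inner optimization problem this article addresses.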

  15. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.

  16. On Global Optimal Sailplane Flight Strategy

    NASA Technical Reports Server (NTRS)

    Sander, G. J.; Litt, F. X.

    1979-01-01

    The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed is considered. Simple rules are obtained for two specific meteorological models. The first one uses concentrated lifts of various strengths and unequal distances. The second one takes into account finite, nonuniform space amplitudes for the lifts and allows, therefore, for dolphin style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.

  17. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. The method's efficacy depends on the embedding parameters, i.e. the embedding dimension, time lag, and number of nearest neighbors, so optimal estimation of these parameters is critical to its application. Conventionally, however, these embedding parameters are estimated separately, using Average Mutual Information (AMI) and False Nearest Neighbors (FNN). This may lead to locally optimal parameter estimates and thus limit prediction accuracy. To address this limitation, this paper combines the local model with simulated annealing (SA) to find globally optimal embedding parameters, and compares this with another global optimization approach, the Genetic Algorithm (GA). The proposed hybrid methods are tested on daily and monthly streamflow time series. The results show that global optimization provides the local model with more accurate predictions than local optimization, and that the LM combined with SA has the advantage in computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
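    The joint, global search over the three embedding parameters can be sketched with a generic simulated-annealing loop over the integer lattice. The error function below is a hypothetical stand-in; in the paper's setting it would be the local model's cross-validated forecast error on the streamflow series.

    ```python
    import math
    import random


    def forecast_error(m, tau, k):
        """Hypothetical validation error over (embedding dimension m,
        time lag tau, neighbor count k); minimized at (4, 12, 8)."""
        return (m - 4) ** 2 + 0.1 * (tau - 12) ** 2 + 0.2 * (k - 8) ** 2


    def anneal_embedding(seed=3, steps=3000, t0=5.0):
        rng = random.Random(seed)
        state = (2, 1, 1)                       # (m, tau, k) starting guess
        energy = forecast_error(*state)
        best, best_e = state, energy
        for i in range(steps):
            temp = t0 * (1.0 - i / steps) + 1e-9   # linear cooling schedule
            # Perturb one parameter by +/-1, keeping all parameters >= 1.
            cand = list(state)
            j = rng.randrange(3)
            cand[j] = max(1, cand[j] + rng.choice([-1, 1]))
            cand = tuple(cand)
            e = forecast_error(*cand)
            # Metropolis acceptance: always take improvements, sometimes
            # accept worse states while the temperature is high.
            if e < energy or rng.random() < math.exp((energy - e) / temp):
                state, energy = cand, e
                if e < best_e:
                    best, best_e = cand, e
        return best, best_e


    best_params, best_err = anneal_embedding()
    ```

    Because all three parameters are varied in one search, interactions among them are accounted for, which the separate AMI/FNN estimates cannot do.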

  18. Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

    NASA Astrophysics Data System (ADS)

    Alemany, Kristina

    Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. 
The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.

  19. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    NASA Astrophysics Data System (ADS)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and the crowding distance, calculated in the objective space, guide the search toward solutions that are well distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space, and those in less crowded clusters are given a greater chance of being selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems and then applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, so the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
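    The decision-space selection idea can be sketched as follows. Grid cells stand in for the paper's clustering step, and the archive, cell size, and function names are illustrative assumptions.

    ```python
    import random
    from collections import defaultdict


    def grid_cell(x, cell=0.25):
        """Assign a decision vector to a grid cell (a crude cluster)."""
        return tuple(int(xi // cell) for xi in x)


    def select_parent(archive, rng):
        """Pick a parent from the least crowded decision-space cluster."""
        clusters = defaultdict(list)
        for sol in archive:
            clusters[grid_cell(sol)].append(sol)
        sparsest = min(clusters.values(), key=len)
        return rng.choice(sparsest)


    rng = random.Random(0)
    # A crowded region near (0, 0) and one isolated solution near (1, 1):
    archive = [(0.01 * i, 0.01 * i) for i in range(10)] + [(1.0, 1.0)]
    parent = select_parent(archive, rng)
    # The isolated solution wins, steering search toward sparse regions
    # of the decision space rather than the objective space.
    ```

    Replacing objective-space crowding with decision-space crowding in the parent selection step is the whole of the modification; the dominance check for archiving is unchanged.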

  20. Overexcitability and Optimal Flow in Talented Dancers, Singers, and Athletes

    ERIC Educational Resources Information Center

    Thomson, Paula; Jaque, S. Victoria

    2016-01-01

    Overexcitability (OE) and optimal flow are variables shared by talented individuals. This study demonstrated that the dancer (n = 86) and opera singer (n = 61) groups shared higher OE profiles compared to the athlete group (n = 50). Two self-report instruments assessed flow (global and subscales) and the five OE dimensions. All groups endorsed…

  1. Hyperfunction solutions of the zero rest mass equations and representations of Lie groups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunne, E.G.

    1984-01-01

    Recently, hyperfunctions have arisen in an essential way in separate results in mathematical physics and in representation theory. In the setting of the twistor program, Wells, with others, has extended the Penrose transform to hyperfunction solutions of the zero rest mass equations, showing that the fundamental isomorphisms hold for this larger space. Meanwhile, Schmid has shown the existence of a canonical globalization of a Harish-Chandra module, V, to a representation of the group. This maximal globalization may be realized as the completion of V in a locally convex vector space in the hyperfunction topology. This thesis shows that the former is a particular case of the latter, where the globalization can be done by hand. This explicit globalization is then carried out for a more general case of the Radon transform on homogeneous spaces.

  2. Alternative difference analysis scheme combining R -space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.

  3. Alternative difference analysis scheme combining R-space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy.

    PubMed

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    2017-07-01

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.

  4. Clustering methods for the optimization of atomic cluster structure

    NASA Astrophysics Data System (ADS)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired from the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
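    The saving mechanism the authors describe (start the expensive deep local search only when it seems worth doing so) can be sketched for a one-dimensional toy. The test function, cluster radius, and crude descent routine below are illustrative stand-ins for the molecular potentials and the refined local solver.

    ```python
    import random


    def f(x):
        return (x * x - 1.0) ** 2          # minima at x = -1 and x = +1


    def local_search(x, step=1e-3, iters=5000):
        """Crude descent standing in for an expensive local optimizer."""
        for _ in range(iters):
            for cand in (x - step, x + step):
                if f(cand) < f(x):
                    x = cand
        return x


    def clustered_multistart(n_starts=60, radius=0.5, seed=5):
        rng = random.Random(seed)
        minima, searches = [], 0
        for _ in range(n_starts):
            x0 = rng.uniform(-2.0, 2.0)
            # Clustering phase: skip the deep search if the sample lies in
            # the cluster of an already-found minimum.
            if any(abs(x0 - m) < radius for m in minima):
                continue
            searches += 1
            m = local_search(x0)
            if not any(abs(m - known) < 0.05 for known in minima):
                minima.append(m)
        return sorted(minima), searches


    minima, searches = clustered_multistart()
    ```

    Both minima are recovered while many of the sixty candidate starts never trigger a local search; in the paper the same filtering is done with learned geometrical features rather than raw coordinates.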

  5. Tuning collective communication for Partitioned Global Address Space programming models

    DOE PAGES

    Nishtala, Rajesh; Zheng, Yili; Hargrove, Paul H.; ...

    2011-06-12

    Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and implementation. In particular, PGAS collectives have semantic issues that are different than in send–receive style message passing programs, and different implementation approaches that take advantage of the one-sided communication style in these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory and hybrids. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. In conclusion, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and demonstrate that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with InfiniBand interconnect.

  6. A Grouping Particle Swarm Optimizer with Personal-Best-Position Guidance for Large Scale Optimization.

    PubMed

    Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi

    2017-05-04

    Particle Swarm Optimization (PSO) is a popular algorithm that is widely investigated and well implemented in many areas. However, the canonical PSO does not maintain population diversity well, which usually leads to premature convergence at local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On one hand, we adopt a uniform random allocation strategy to assign particles to different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning rather than the current global best particle. In this way, exemplar diversity increases and the influence of the global best particle is eliminated. We test the proposed algorithm on the CEC 2008 and CEC 2010 benchmarks, which concern large scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits competitive ability to maintain population diversity and satisfactory performance on the problems.
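    The grouping and pbest-guidance ideas can be sketched with a simplified competitive update. The pairing scheme, coefficients, and test function below are illustrative assumptions, not the published GPSO-PG update rule.

    ```python
    import random


    def sphere(x):
        return sum(v * v for v in x)


    def gpso_pg(dim=10, swarm_size=40, iters=200, seed=11):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
               for _ in range(swarm_size)]
        vel = [[0.0] * dim for _ in range(swarm_size)]
        pbest = [p[:] for p in pos]
        pbest_f = [sphere(p) for p in pos]
        initial_best = min(pbest_f)
        for _ in range(iters):
            order = list(range(swarm_size))
            rng.shuffle(order)
            # Random grouping into pairs; the loser learns from the winner.
            for a, b in zip(order[::2], order[1::2]):
                loser, winner = ((a, b) if sphere(pos[a]) > sphere(pos[b])
                                 else (b, a))
                # Social term uses a randomly chosen personal best, not the
                # single global best particle.
                guide = pbest[rng.randrange(swarm_size)]
                for d in range(dim):
                    vel[loser][d] = (rng.random() * vel[loser][d]
                                     + rng.random() * (pos[winner][d] - pos[loser][d])
                                     + rng.random() * (guide[d] - pos[loser][d]))
                    pos[loser][d] += vel[loser][d]
                f_new = sphere(pos[loser])
                if f_new < pbest_f[loser]:
                    pbest_f[loser] = f_new
                    pbest[loser] = pos[loser][:]
        return initial_best, min(pbest_f)


    init_f, final_f = gpso_pg()
    ```

    Because only pairwise losers move, and each learns from a different pbest exemplar, no single leader dominates the swarm, which is the diversity-preserving effect the abstract describes.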

  7. Long-Time Behavior and Critical Limit of Subcritical SQG Equations in Scale-Invariant Sobolev Spaces

    NASA Astrophysics Data System (ADS)

    Coti Zelati, Michele

    2018-02-01

    We consider the subcritical SQG equation in its natural scale-invariant Sobolev space and prove the existence of a global attractor of optimal regularity. The proof is based on a new energy estimate in Sobolev spaces to bootstrap the regularity to the optimal level, derived by means of nonlinear lower bounds on the fractional Laplacian. This estimate appears to be new in the literature and allows a sharp use of the subcritical nature of the L^∞ bounds for this problem. As a by-product, we obtain attractors for weak solutions as well. Moreover, we study the critical limit of the attractors and prove their stability and upper semicontinuity with respect to the strength of the diffusion.

  8. Hybrid simulated annealing and its application to optimization of hidden Markov models for visual speech recognition.

    PubMed

    Lee, Jong-Seok; Park, Cheol Hoon

    2010-08-01

    We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm over conventional ones is demonstrated via isolated word recognition experiments.
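
The combination of Metropolis acceptance with a greedy local-improvement operator can be sketched as follows (the perturbation scales, cooling schedule, and seed are illustrative assumptions, and the HMM-specific objective is replaced by a generic function):

```python
import math, random

def hybrid_sa(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=400, seed=2):
    """Sketch of hybrid simulated annealing: standard SA moves, plus a greedy
    local-improvement operator that replaces the current solution whenever a
    nearby candidate is better (parameter names and values are illustrative)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        # SA proposal: random perturbation, accepted by the Metropolis rule
        y = [xi + rng.gauss(0.0, step) for xi in x]
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
        # local optimization operator: keep the best of a few nearby probes
        for _ in range(3):
            z = [xi + rng.gauss(0.0, 0.1 * step) for xi in x]
            fz = f(z)
            if fz < fx:
                x, fx = z, fz
        if fx < fbest:
            best, fbest = x[:], fx
        t *= cooling
    return best, fbest
```

The local operator only ever replaces the current solution with a better one, so it speeds up convergence without disturbing the SA acceptance mechanism that permits uphill moves early on.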

  9. Optimal correction and design parameter search by modern methods of rigorous global optimization

    NASA Astrophysics Data System (ADS)

    Makino, K.; Berz, M.

    2011-07-01

    Frequently the design of schemes for correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for correction, usually appearing in disconnected regions of parameter space that cannot be directly characterized by analytical means. In such cases, an abundance of optimization runs is often carried out, each of which determines a local minimum depending on the specific initial conditions chosen. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches varying other parameters. In a formal sense, however, this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible ways to adjust nonlinear parameters to achieve correction of high-order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value, since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key idea of the method lies in an interplay of rigorous local underestimators of the objective function with a scheme that uses these underestimators to iteratively eliminate regions lying above already-known upper bounds of the minima, in what is commonly known as a branch-and-bound approach.
Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators over large regions. As a consequence, the progressive pruning of the allowed search space as the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of the optimization. The methods are particularly powerful when executed in interplay with genetic optimizers that generate their new populations within the currently active unpruned space; their current best guess provides rigorous upper bounds on the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes, chromaticities, etc. in storage ring lattices.
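
As a toy illustration of the branch-and-bound idea described above, the following uses naive interval arithmetic to build a lower bound (underestimator) for a one-dimensional polynomial and prunes boxes whose bound exceeds the incumbent upper bound. This is a sketch only: a rigorous implementation would use outward rounding, and the sharp Differential Algebraic / Taylor-model underestimators of the paper are not reproduced here.

```python
def isq(a, b):
    """Interval square: enclosure of {x * x : a <= x <= b}."""
    lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return lo, max(a * a, b * b)

def bb_min(lo, hi, tol=1e-6):
    """Branch-and-bound minimization of f(x) = (x^2 - 2)^2 on [lo, hi]."""
    f = lambda x: (x * x - 2.0) ** 2
    def lower(a, b):                     # interval lower bound of f on [a, b]
        s0, s1 = isq(a, b)
        return isq(s0 - 2.0, s1 - 2.0)[0]
    upper = min(f(lo), f(hi), f(0.5 * (lo + hi)))    # incumbent upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if lower(a, b) > upper or b - a < tol:       # prune, or box small enough
            continue
        m = 0.5 * (a + b)
        upper = min(upper, f(m))         # sampled value tightens the incumbent
        boxes += [(a, m), (m, b)]
    return upper
```

Because both minimizers (x = ±√2) keep a box lower bound of zero, the search refines around both of them while every other region is discarded once the incumbent drops below its bound; this is how the method locates all optima regardless of starting values.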

  10. Ringed Seal Search for Global Optimization via a Sensitive Search Model

    PubMed Central

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup's strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled as a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled as a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization, and Cuckoo Search in terms of convergence rate to the global optimum. RSS also shows an improvement in the balance between exploration (extensive) and exploitation (intensive) of the search space.
RSS efficiently mimics seal pup behavior in finding the best lair and provides a new algorithm for use in global optimization problems. PMID:26790131
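
The two-state walk described above can be sketched as follows (the step scales, the 20% predator-noise rate, the bounds, and the seed are illustrative assumptions, not the authors' settings):

```python
import random

def rss(f, dim=2, iters=2000, lo=-5.0, hi=5.0, seed=3):
    """Sketch of the Ringed Seal Search idea: a random walk that switches
    between a Brownian state (intensive search near the current lair) and a
    Levy state (extensive search of sparse, distant lairs), with the switch
    driven by random 'predator noise'."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(dim)]
    fx = f(x)
    best, fbest = x[:], fx
    for _ in range(iters):
        if rng.random() < 0.2:                       # predator noise: urgent state
            u = 1.0 - rng.random()                   # in (0, 1], avoids div by 0
            step = [rng.choice([-1.0, 1.0]) * 0.5 / (u ** 0.5)
                    for _ in range(dim)]             # heavy-tailed Levy-like step
        else:                                        # normal state
            step = [rng.gauss(0.0, 0.1) for _ in range(dim)]  # Brownian step
        y = [min(hi, max(lo, xi + si)) for xi, si in zip(x, step)]
        fy = f(y)
        if fy < fbest:
            best, fbest = y[:], fy
        # move on if the new lair is better, otherwise retreat to the best lair
        x, fx = (y, fy) if fy < fx else (best[:], fbest)
    return best, fbest
```

The Brownian state supplies exploitation around the current lair, while the occasional heavy-tailed Levy step supplies exploration, mirroring the intensive/extensive balance the abstract emphasizes.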

  11. TIGO: a geodetic observatory for the improvement of the global reference frame

    NASA Astrophysics Data System (ADS)

    Schlueter, Wolfgang; Hase, Hayo; Boeer, Armin

    1999-12-01

    The Bundesamt fuer Kartographie und Geodaesie (BKG) will provide a major contribution to the improvement and maintenance of the global reference frames, the ICRF (International Celestial Reference Frame) and ITRF (International Terrestrial Reference Frame), with the operation of TIGO (Transportable Integrated Geodetic Observatory). TIGO is designed as a transportable geodetic observatory comprising all geodetic space techniques relevant for a fundamental station (including VLBI, SLR, GPS). The transportability of the observatory makes it possible to fill gaps in the International Space Geodetic Network and to optimize the contribution to the global reference frames. TIGO should operate for a period of at least 2 to 3 years at one location. BKG is seeking cooperation with countries willing to contribute to the ITRF and to support the operation of TIGO.

  12. Opportunities and challenges of international coordination efforts in space exploration - the DLR perspective

    NASA Astrophysics Data System (ADS)

    Boese, Andrea

    The German Aerospace Center and German Space Agency DLR has defined internationalisation as one of the four pillars of its corporate strategy. Driven by global challenges, national space agencies like DLR are seeking partnerships to contribute to essential societal needs, such as human welfare, sustainability of life, economic development, security, culture and knowledge. All partnerships with both traditional and non-traditional partners must reflect a balanced approach between national requirements and the needs of the international community. In view of the challenges emerging from this complexity, endeavours like space exploration must be built on mutual cooperation, especially in a challenging political environment. Effective and efficient exploitation of existing expertise, human resources, facilities and infrastructures requires consolidated actions by stakeholders, interest groups and authorities. This basic principle applies to any space exploration activity. DLR has been among the agencies participating in the International Space Exploration Coordination Group (ISECG) since its beginning in 2007. The strategic goals of DLR regarding space exploration correspond to the purpose of ISECG as a forum to share objectives and plans and to take concrete steps towards partnerships for a globally coordinated effort in space exploration. DLR contributes to ISECG publications, especially the “Global Exploration Roadmap” and the “Benefits Stemming from Space Exploration”, to see reflected those messages that support cooperation with internal and external exploration stakeholders in science and technology, and communication with those in politics and society. DLR also provides input to other groups engaging in space exploration. However, taking into account limited resources and expected results, the effectiveness of multiple coordination and planning mechanisms needs to be discussed.

  13. Comparison of different patient positioning strategies to minimize shoulder girdle artifacts in head and neck CT.

    PubMed

    Wirth, Stefan; Meindl, Thomas; Treitl, Marcus; Pfeifer, Klaus-Jürgen; Reiser, Maximilian

    2006-08-01

    The purpose of this study was to analyze different patient positioning strategies for minimizing artifacts of the shoulder girdle in head and neck CT. Standardized CT examinations of three positioning groups were compared (P: patients pushed their shoulders downwards; D: similar optimization by a pulling device; N: no particular positioning optimization). Parameters analyzed were the length of the cervical spine not being superimposed by the shoulder girdle as well as noise in the supraclavicular space. In groups P and D, the portion of the cervical spine not superimposed was significantly larger than in group N (P: 10.4 cm; D: 10.6 cm; N: 8.5 cm). At the supraclavicular space, noise decreased significantly (P: 12.5 HU; D: 12.1 HU; N: 17.7 HU). No significant differences between the two position-optimized groups (P and D) were detected. Optimized shoulder positioning by the patient increases image quality in CT head and neck imaging. The use of a pulling device offers no additional advantages.

  14. Optimizing energy growth as a tool for finding exact coherent structures

    NASA Astrophysics Data System (ADS)

    Olvera, D.; Kerswell, R. R.

    2017-08-01

    We discuss how searching for finite-amplitude disturbances of a given energy that maximize their subsequent energy growth after a certain later time T can be used to probe the phase space around a reference state and ultimately to find other nearby solutions. The procedure relies on the fact that of all the initial disturbances on a constant-energy hypersphere, the optimization procedure will naturally select the one that lies closest to the stable manifold of a nearby solution in phase space if T is large enough. Then, when in its subsequent evolution the optimal disturbance transiently approaches the new solution, a flow state at this point can be used as an initial guess to converge the solution to machine precision. We illustrate this approach in plane Couette flow by rediscovering the spanwise-localized "snake" solutions of Schneider et al. [Phys. Rev. Lett. 104, 104501 (2010), 10.1103/PhysRevLett.104.104501], probing phase space at very low Reynolds numbers (less than 127.7) where the constant-shear solution is believed to be the global attractor and examining how the edge between laminar and turbulent flow evolves when stable stratification eliminates the turbulence. We also show that the steady snake solution smoothly delocalizes as unstable stratification is gradually turned on until it connects (via an intermediary global three-dimensional solution) to two-dimensional Rayleigh-Bénard roll solutions.

  15. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2012-01-01

    The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and additionally to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound no individual termite carries a blueprint of the home in a global sense, but their interactions, based strictly on local desires, create a complex superstructure. Leveraging this Emergence concept in a simulation of cooperative agents (robots) will allow an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies while leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be optimized to the simplest possible for the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues will be incorporated into the simulation. This approach falls under swarm robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm robotics often contains biologically inspired approaches; research draws on observation of social insects as well as data from herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration are the method under consideration for further study.

  16. A multi-group firefly algorithm for numerical optimization

    NASA Astrophysics Data System (ADS)

    Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun

    2017-08-01

    To address the premature convergence of the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). The firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the optimal firefly leads the other fireflies in the early global evolution and establishes mutual information exchange among the fireflies. Then each firefly performs a local search by following the brighter fireflies in its neighborhood. At the same time, a learning mechanism that exchanges information among the best fireflies of the various subgroups helps the population reach global optimization goals more effectively. Experimental results verify the effectiveness of the proposed algorithm.
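
The subgroup structure with per-group parameters and inter-group learning can be sketched as follows (the parameter values and update rules are illustrative stand-ins and do not reproduce IMGFA):

```python
import math, random

def multigroup_fa(f, dim=2, n=20, n_groups=4, iters=300, seed=4):
    """Sketch of a multi-group firefly algorithm: the swarm is split into
    subgroups with different attractiveness parameters; fireflies move toward
    brighter subgroup members, and subgroup bests share information with the
    global best each iteration."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    F = [f(x) for x in X]
    group = [i % n_groups for i in range(n)]
    gamma = [0.01 * (g + 1) for g in range(n_groups)]   # per-subgroup absorption
    for t in range(iters):
        alpha = 0.3 * (0.97 ** t)                       # shrinking random step
        for i in range(n):
            for j in range(n):
                if group[j] == group[i] and F[j] < F[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = math.exp(-gamma[group[i]] * r2)   # attractiveness
                    X[i] = [a + beta * (b - a) + alpha * rng.uniform(-1, 1)
                            for a, b in zip(X[i], X[j])]
            F[i] = f(X[i])
        # multi-group learning: each subgroup best moves toward the global best
        gb = min(range(n), key=lambda k: F[k])
        for g in range(n_groups):
            m = min((k for k in range(n) if group[k] == g), key=lambda k: F[k])
            X[m] = [a + 0.5 * (b - a) for a, b in zip(X[m], X[gb])]
            F[m] = f(X[m])
    b = min(range(n), key=lambda k: F[k])
    return X[b], F[b]
```

Because the current global best never moves, the swarm's best value is non-increasing, while the differing gamma values give each subgroup a different exploration/exploitation balance.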

  17. Evolution of the Global Space Geodesy Network

    NASA Astrophysics Data System (ADS)

    Pearlman, Michael R.; Bianco, Giuseppe; Ipatov, Alexander; Ma, Chopo; Neilan, Ruth; Noll, Carey; Park, Jong Uk; Pavlis, Erricos; Wetzel, Scott

    2013-04-01

    The improvements in the reference frame and other space geodesy data products spelled out in the GGOS 2020 plan will evolve over time as new space geodesy sites enhance the global distribution of the network and new technologies are implemented at the sites thus enabling improved data processing and analysis. The goal of 30 globally distributed core sites with VLBI, SLR, GNSS and DORIS (where available) will take time to materialize. Co-location sites with less than the full core complement will continue to play a very important role in filling out the network while it is evolving and even after full implementation. GGOS through its Call for Participation, bi-lateral and multi-lateral discussions and work through the scientific Services has been encouraging current groups to upgrade and new groups to join the activity. This talk will give an update on the current expansion of the global network and the projection for the network configuration that we forecast over the next 10 years.

  18. Small worlds in space: Synchronization, spatial and relational modularity

    NASA Astrophysics Data System (ADS)

    Brede, M.

    2010-06-01

    In this letter we investigate networks that have been optimized to realize a trade-off between enhanced synchronization and cost of wire to connect the nodes in space. Analyzing the evolved arrangement of nodes in space and their corresponding network topology, a class of small-world networks characterized by spatial and network modularity is found. More precisely, for low cost of wire optimal configurations are characterized by a division of nodes into two spatial groups with maximum distance from each other, whereas network modularity is low. For high cost of wire, the nodes organize into several distinct groups in space that correspond to network modules connected on a ring. In between, spatially and relationally modular small-world networks are found.

  19. Optimal space of linear classical observables for Maxwell k-forms via spacelike and timelike compact de Rham cohomologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benini, Marco, E-mail: mbenini87@gmail.com, E-mail: mbenini@uni-potsdam.de

    Being motivated by open questions in gauge field theories, we consider non-standard de Rham cohomology groups for timelike compact and spacelike compact support systems. These cohomology groups are shown to be isomorphic respectively to the usual de Rham cohomology of a spacelike Cauchy surface and its counterpart with compact support. Furthermore, an analog of the usual Poincaré duality for de Rham cohomology is shown to hold for the case with non-standard supports as well. We apply these results to find optimal spaces of linear observables for analogs of arbitrary degree k of both the vector potential and the Faraday tensor. The term optimal has to be intended in the following sense: The spaces of linear observables we consider distinguish between different configurations; in addition to that, there are no redundant observables. This last point in particular heavily relies on the analog of Poincaré duality for the new cohomology groups.

  20. Doppler lidar wind measurement on Eos

    NASA Technical Reports Server (NTRS)

    Fitzjarrald, D.; Bilbro, J.; Beranek, R.; Mabry, J.

    1985-01-01

    A polar-orbiting platform segment of the Earth Observing System (EOS) could carry a CO2-laser based Doppler lidar for recording global wind profiles. Development goals would include the manufacture of a 10 J laser with a 2 yr operational life, space-rating the optics and associated software, and the definition of models for global aerosol distributions. Techniques will be needed for optimal scanning and generating computer simulations which will provide adequately accurate weather predictions.

  1. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.

  2. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of motion information of the space target is the premise for realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, dynamic parameters of the target, such as its inertia, angular momentum and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be acquired by substituting these identified parameters into the Euler's equations of the target. Accurate prediction requires precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and finds a global minimum of the objective function with the guidance of the objective function's gradient. The search for the global minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
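
The two-step identification scheme (a rough closed-form estimate, then gradient-guided refinement of a least-squares objective) can be illustrated on a generic model. In this sketch plain gradient descent stands in for the interior-point method, and a hypothetical exponential-decay model y = a*exp(-b*t) stands in for the target's Euler dynamics:

```python
import math

def identify(ts, ys):
    """Two-step parameter identification sketch.

    Step 1 computes a rough estimate from the observations; step 2 refines it
    by descending the gradient of the squared prediction error, started at the
    rough estimate (as the abstract's IPM is started)."""
    # Step 1: rough estimate for y = a * exp(-b t) via log-linear regression
    n = len(ts)
    ls = [math.log(y) for y in ys]
    tbar, lbar = sum(ts) / n, sum(ls) / n
    b = -sum((t - tbar) * (l - lbar) for t, l in zip(ts, ls)) / \
        sum((t - tbar) ** 2 for t in ts)
    a = math.exp(lbar + b * tbar)
    # Step 2: refine (a, b) by gradient descent on the squared prediction error
    for _ in range(2000):
        ga = gb = 0.0
        for t, y in zip(ts, ys):
            e = a * math.exp(-b * t) - y          # observed-vs-predicted residual
            ga += 2.0 * e * math.exp(-b * t)
            gb += 2.0 * e * (-a * t * math.exp(-b * t))
        a -= 1e-3 * ga
        b -= 1e-3 * gb
    return a, b
```

The division of labor mirrors the abstract: the rough estimate places the search near the right basin, so the gradient-guided refinement converges quickly.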

  3. On the existence of global solutions of the one-dimensional cubic NLS for initial data in the modulation space Mp,q (R)

    NASA Astrophysics Data System (ADS)

    Chaichenets, Leonid; Hundertmark, Dirk; Kunstmann, Peer; Pattakos, Nikolaos

    2017-10-01

    We prove global existence for the one-dimensional cubic nonlinear Schrödinger equation in modulation spaces Mp,p‧ for p sufficiently close to 2. In contrast to known results, [9] and [14], our result requires no smallness condition on initial data. The proof adapts a splitting method inspired by work of Vargas-Vega, Hyakuna-Tsutsumi and Grünrock to the modulation space setting and exploits polynomial growth of the free Schrödinger group on modulation spaces.

  4. Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Stottler, D.

    There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and by ground and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent-revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and make those observations more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog with fewer lost objects using the same set of sensor resources. This approach can also inherently trade off fewer high-priority tasks against more lower-priority tasks when there is benefit in doing so. To date, the project has completed a prototyping and feasibility study, using open-source data on the SSN's sensors, that showed significant reduction in orbit-metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.

  5. Optimization of Composite Structures with Curved Fiber Trajectories

    NASA Astrophysics Data System (ADS)

    Lemaire, Etienne; Zein, Samih; Bruyneel, Michael

    2014-06-01

    This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach for generating equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method determines the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, maximum-stiffness optimization examples are presented. The shape of the design space is discussed with regard to local and global optimal solutions.

  6. Optimization of Microelectronic Devices for Sensor Applications

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    2000-01-01

    The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments and communication systems, among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space (searching for optimized performance through repeated fabrication efforts) is infeasible, it does motivate the development of reliable software design tools. These tools necessitate models based on the fundamental physics and mathematics of the device to accurately model effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly developed data structures. These tools can then be integrated into an optimization environment, and with the available memory capacity and computational speed of high-performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications of the electromagnetic modeling of infrared filtering, as well as heterostructure device design, will be presented using genetic algorithm global optimization methods.

  7. Globalizing animal care and use: making the dream a reality

    NASA Technical Reports Server (NTRS)

    Bielitzki, J. T.

    1999-01-01

    What will drive the globalization of animal care in the 21st century? Will targeted concerns from privately funded groups be the motivating factors? Will the threat of disease, and the concomitant hue and cry from a threatened public, be responsible for bringing nations together to collaborate on rules and regulations? The author, who is responsible for the animal use protocols on NASA's space station, explains why globalization on Planet Earth may be guided by what happens in outer space.

  8. Globalizing animal care and use: making the dream a reality.

    PubMed

    Bielitzki, J T

    1999-11-01

    What will drive the globalization of animal care in the 21st century? Will targeted concerns from privately funded groups be the motivating factors? Will the threat of disease, and the concomitant hue and cry from a threatened public, be responsible for bringing nations together to collaborate on rules and regulations? The author, who is responsible for the animal use protocols on NASA's space station, explains why globalization on Planet Earth may be guided by what happens in outer space.

  9. The Deep Space Gateway Lightning Mapper (DLM) - Monitoring Global Change and Thunderstorm Processes Through Observations of Earth's High-Latitude Lightning from Cis-Lunar Orbit

    NASA Technical Reports Server (NTRS)

    Lang, Timothy; Blakeslee, R. J.; Cecil, D. J.; Christian, H. J.; Gatlin, P. N.; Goodman, S. J.; Koshak, W. J.; Petersen, W. A.; Quick, M.; Schultz, C. J.; hide

    2018-01-01

    Function: Monitor global change and thunderstorm processes through observations of Earth's high-latitude lightning. This instrument will combine long-lived sampling of individual thunderstorms with long-term observations of lightning at high latitudes: How is global change affecting thunderstorm patterns; How do high-latitude thunderstorms differ from low-latitude? Why is the Gateway the optimal facility for this instrument / research: Expected DSG (Deep Space Gateway) orbits will provide nearly continuous viewing of the Earth's high latitudes (50 degrees latitude and poleward); These regions are not well covered by existing lightning mappers (e.g., Lightning Imaging Sensor / LIS, or Geostationary Lightning Mapper / GLM); Polar, Molniya, Tundra, etc. Earth orbits have significant drawbacks related to continuous coverage and/or stable FOVs (Fields of View).

  10. Application of Modified Particle Swarm Optimization Method for Parameter Extraction of 2-D TEC Mapping

    NASA Astrophysics Data System (ADS)

    Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.

    2012-04-01

    The ionosphere is a very important part of space weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters used to investigate ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source for global and regional TEC modeling. When Global Ionospheric Maps (GIM) of the International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere along the midlatitude regions can be modeled as a constant, linear or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proven itself a fast-converging and effective optimization tool in diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in the extraction of geophysical parameters such as the model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. Synthetic surfaces representing the trend and small-scale variability of various ionospheric states are necessary to compare the performance of mPSO in terms of number of iterations, accuracy of parameter estimation, and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated, and the performance of mPSO is tested against these bounds. For global models, the sample points used in the optimization are obtained from the IGS receiver network.
For regional TEC models, regional networks such as the Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces which are functions of latitude and longitude. Global models require seven parameters for a single-centered Gaussian and 13 parameters for a double-centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction for various regional and global models. The normalized reconstruction error varies from 10^-4 for constant surfaces to 10^-3 for quadratic surfaces in regional models sampled with regional networks. Even for the case of a severe geomagnetic storm that affects measurements globally, with the IGS network the reconstruction error is on the order of 10^-1, even though individual parameters have higher normalized errors. The modified PSO technique proved itself to be a useful tool for parameter extraction with more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.
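
    As a rough illustration of PSO-based surface-parameter extraction (the authors' specific mPSO modifications are not detailed enough in the abstract to reproduce), the sketch below uses a plain PSO to recover three illustrative coefficients of a hypothetical planar "regional TEC" model a + b·lat + c·lon from synthetic samples. The coefficient values and grid are assumptions, not the paper's data.

```python
import random

def pso(obj, dim, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=0):
    """Plain particle swarm minimization; the paper's mPSO adds modifications
    not reproduced here."""
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal best positions
    pf = [obj(x) for x in X]                 # personal best values
    g = min(range(n), key=pf.__getitem__)
    G, gf = P[g][:], pf[g]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fi = obj(X[i])
            if fi < pf[i]:
                P[i], pf[i] = X[i][:], fi
                if fi < gf:
                    G, gf = X[i][:], fi
    return G, gf

# Hypothetical planar surface a + b*lat + c*lon sampled on a small grid.
true = (3.0, 0.5, -0.2)
pts = [(la, lo) for la in range(5) for lo in range(5)]
obs = [true[0] + true[1] * la + true[2] * lo for la, lo in pts]
err = lambda p: sum((p[0] + p[1] * la + p[2] * lo - o) ** 2
                    for (la, lo), o in zip(pts, obs))
params, residual = pso(err, dim=3)
```

    On this convex toy objective the swarm settles near the generating coefficients; the real problem in the abstract involves Gaussian TEC surfaces with up to 13 parameters, which is where the modified PSO matters.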

  11. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. 
Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.

  12. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
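
    A minimal sketch of the simulated annealing idea referenced above (not the paper's structural formulation): uphill moves are accepted with a temperature-dependent probability, which is what gives the method its improved chance of escaping local minima in a multimodal design space. The objective function and all parameter values here are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=20.0, cooling=0.999, iters=5000, seed=0):
    """Minimize f over one design variable; a worse candidate is accepted with
    probability exp(-delta/T), letting the search climb out of local minima."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                      # geometric cooling schedule
    return best_x, best_f

# Multimodal stand-in objective: many local minima, global minimum at x = 0.
multimodal = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

# Start inside a local basin (x = 4) that a pure descent method cannot leave.
x_opt, f_opt = simulated_annealing(multimodal, x0=4.0)
```

    A deterministic local search started at x = 4 would stay near f ≈ 16; the annealed search routinely finds lower basins.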

  13. Program to study optimal protocol for cardiovascular and muscular efficiency. [physical fitness training for manned space flight

    NASA Technical Reports Server (NTRS)

    Olree, H. D.

    1974-01-01

    Training programs necessary for the development of optimal strength during prolonged manned space flight were examined, and exercises performed on the Super Mini Gym Skylab 2 were compared with similar exercises on the Universal Gym and with calisthenics. Cardiopulmonary gains were found to be negligible, but all training groups exhibited good gains in strength.

  14. Application of firefly algorithm to the dynamic model updating problem

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the past decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. This is, to the authors' best knowledge, the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure and those of the model. The performance of the algorithm in finding the optimal solution in a multidimensional search space is analyzed. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built structure.
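
    The core firefly update can be sketched as follows: each firefly moves toward every brighter one, with attractiveness decaying with distance and a small random walk added. This is a generic FA on a stand-in sphere objective, not the bridge FE updating problem; all parameter values are assumptions.

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, alpha=0.3, beta0=1.0, gamma=1.0, seed=1):
    """Generic firefly algorithm: lower objective value = brighter firefly."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    light = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # firefly i is attracted to brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness fades with distance
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = f(pop[i])
        alpha *= 0.97                            # damp the random walk over time
    k = min(range(n), key=light.__getitem__)
    return pop[k], light[k]

# Stand-in objective; the paper's objective compares measured and predicted
# natural frequencies and mode shapes instead.
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = firefly_minimize(sphere, dim=2)
```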

  15. RandSpg: An open-source program for generating atomistic crystal structures with specific spacegroups

    NASA Astrophysics Data System (ADS)

    Avery, Patrick; Zurek, Eva

    2017-04-01

    A new algorithm, RANDSPG, that can be used to generate trial crystal structures with specific space groups and compositions is described. The program has been designed for systems where the atoms are independent of one another, and it is therefore primarily suited towards inorganic systems. The structures that are generated adhere to user-defined constraints such as the lattice shape and size, stoichiometry, set of space groups to be generated, and factors that influence the minimum interatomic separations. In addition, the user can optionally specify if the most general Wyckoff position is to be occupied or constrain select atoms to specific Wyckoff positions. Extensive testing indicates that the algorithm is efficient and reliable. The library is lightweight, portable, dependency-free and is published under a license recognized by the Open Source Initiative. A web interface for the algorithm is publicly accessible at http://xtalopt.openmolecules.net/randSpg/randSpg.html. RANDSPG has also been interfaced with the XTALOPT evolutionary algorithm for crystal structure prediction, and it is illustrated that the use of symmetric lattices in the first generation of randomly created individuals decreases the number of structures that need to be optimized to find the global energy minimum.

  16. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils, in which the stored energy is minimized within a constrained domain, is outlined. The current density maps obtained utilizing the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  17. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance in successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
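
    The partitioning idea can be sketched under the standard interpretation: bin one Monte Carlo sample set by each input in turn, and estimate that input's main effect as Var(E[Y|X_i])/Var(Y) from the bin-conditional means. This is a generic binning estimator consistent with the approach described, not the paper's exact scheme; the additive test model is an assumption with known indices 0.2 and 0.8.

```python
import random

def main_effects(model, dim, n=20000, bins=20, seed=0):
    """Estimate all first-order sensitivity indices from ONE sample set by
    partitioning it into successive, non-overlapping subintervals of each input."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(n)]
    Y = [model(x) for x in X]
    mu = sum(Y) / n
    var = sum((y - mu) ** 2 for y in Y) / n
    S = []
    for i in range(dim):
        groups = [[] for _ in range(bins)]
        for x, y in zip(X, Y):
            groups[min(int(x[i] * bins), bins - 1)].append(y)
        # Variance of the bin-conditional means approximates Var(E[Y | X_i]).
        cond = sum(len(g) * (sum(g) / len(g) - mu) ** 2 for g in groups if g) / n
        S.append(cond / var)
    return S

# Additive toy model Y = x0 + 2*x1 on [0,1]^2: exact indices are 0.2 and 0.8.
S = main_effects(lambda x: x[0] + 2.0 * x[1], dim=2)
```

    Note that all indices come from the same n model evaluations, which is the efficiency claim of the abstract.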

  18. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. 
The proposed optimization algorithm yielded high-quality plans for all of the cases, without human planner intervention. A comparison with the optimized solutions obtained using a similar optimization model but with human planner intervention revealed that the proposed algorithm produced plans superior to those developed manually. The proposed algorithm can generate admissible solutions within reasonable computational times and can be used to develop fully automated IMRT treatment planning methods, thus reducing human planners' workloads during iterative processes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Time Resolved Temperature Measurement of Hypervelocity Impact Generated Plasma Using a Global Optimization Method

    NASA Astrophysics Data System (ADS)

    Hew, Y. M.; Linscott, I.; Close, S.

    2015-12-01

    Meteoroids and orbital debris, collectively referred to as hypervelocity impactors, travel at between 7 and 72 km/s in free space. Upon impact onto a spacecraft, the conversion of kinetic energy into ionization and vaporization occurs within a very brief timescale and results in a small, dense expanding plasma with a very strong optical flash. The radio frequency (RF) emission produced by this plasma can potentially lead to electrical anomalies within the spacecraft. In addition, space weather, such as solar activity and the background plasma, can establish spacecraft conditions that exacerbate the damage done by these impacts. The impact flash emitted during ground-based hypervelocity impact tests has long been expected to carry the characteristics of the impact-generated plasma, such as its temperature and density; by studying this emission spectrum, the properties of the impact-generated gas cloud and plasma can be inferred. This paper presents a method for time-resolved plasma temperature estimation using three-color visible-band photometry data with a global pattern search optimization method. The equilibrium temperature of the plasma is estimated using an optical model which accounts for both the line emission and the continuum emission from the plasma. Using a global pattern-search-based optimizer, the model can isolate the contribution of the continuum emission from that of the line emission, and the plasma temperature can thus be estimated. Prior to the optimization step, a Gaussian process is applied to extract the optical emission signal from the noisy background. The resultant temperature and line-to-continuum emission weighting factor are consistent with the spectrum of the impactor material and the current literature.
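
    For readers unfamiliar with pattern search, the basic (local) compass-search variant is sketched below: poll ± one step along each coordinate, accept improvements, and shrink the step when no poll succeeds. The paper's "global" pattern search adds globalization layers not reproduced here, and the quadratic objective is a stand-in for the optical emission model.

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free compass search: poll +/- step along each axis;
    halve the step whenever a full poll cycle yields no improvement."""
    x, fx = list(x0), f(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                cand = x[:]
                cand[d] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= shrink
        it += 1
    return x, fx

# Stand-in objective with minimum at (2, -1); no derivatives are needed.
quad = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
x_min, f_min = pattern_search(quad, [0.0, 0.0])
```

    Because only function values are polled, the same loop works for objectives such as a fitted line-plus-continuum emission model where gradients are unavailable.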

  20. Cascade Optimization Strategy Maximizes Thrust for High-Speed Civil Transport Propulsion System Concept

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to a multidisciplinary optimization tool COMETBOARDS that was developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of fans, fuel flow, and other factors as important active design variables. Constraints were specified on factors including the maximum speed of the compressors, the positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem by using the most reliable optimization algorithm available in COMETBOARDS would provide feasible optimum results only for a portion of the aircraft flight regime because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), diverse constraint types, and overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, which was devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems for their flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when it was initiated from different design points. Multiple optimizers in a specified sequence, pseudorandom damping, and reduction of the design space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine (Mach-2.4 mixed-flow turbofan), along with its configuration, is shown. 
The optimum thrust is normalized with respect to NEPP results. COMETBOARDS added value in the design optimization of the HSCT engine.

  1. Stochastic Optimal Control via Bellman's Principle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Sun, Jian Q.

    2003-01-01

    This paper presents a method for finding optimal controls of nonlinear systems subject to random excitations. The method is capable of generating global control solutions when state and control constraints are present. The solution is global in the sense that controls for all initial conditions in a region of the state space are obtained. The approach is based on Bellman's principle of optimality, the Gaussian closure and the short-time Gaussian approximation. Examples include a system with a state-dependent diffusion term, a system in which the infinite hierarchy of moment equations cannot be analytically closed, and an impact system with an elastic boundary. The uncontrolled and controlled dynamics are studied by creating a Markov chain with a control-dependent transition probability matrix via the Generalized Cell Mapping method. In this fashion, both the transient and stationary controlled responses are evaluated. The results show excellent control performance.

  2. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used for the training of feedforward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness, the optimization problem is considered as a whole, and error back-propagation gradient descent is used in training the BP neural network. Each particle updates its velocity and position according to its individual optimum and the global optimum, with learning biased more toward the social (global) optimum and less toward its individual optimum, which helps the particles avoid local optima. Using gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and then keeps approaching it; for the same running time, the algorithm exhibits faster convergence speed and better search performance, with search efficiency especially improved in the later stage.

  3. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high-speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  4. Global carbon assimilation system using a local ensemble Kalman filter with multiple ecosystem models

    NASA Astrophysics Data System (ADS)

    Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze

    2014-11-01

    In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to the observations. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
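
    The Bayesian model averaging step can be sketched as follows, assuming a Gaussian likelihood for the forecast-observation mismatch (the paper does not state its exact likelihood form). All numeric values below are hypothetical toy inputs, not the study's data.

```python
import math

def bma_weights(forecasts, obs, sigma=1.0):
    """Weight each model by the Gaussian likelihood of its CO2 forecast
    against the observed concentration; closer forecasts get larger weights."""
    lik = [math.exp(-((f - obs) ** 2) / (2.0 * sigma ** 2)) for f in forecasts]
    total = sum(lik)
    return [l / total for l in lik]

def bma_average(values, weights):
    """Weighted average of the per-model optimized fluxes."""
    return sum(v * w for v, w in zip(values, weights))

# Hypothetical CO2 forecasts (ppm) from three ecosystem models at one station.
w = bma_weights([400.0, 402.0, 405.0], obs=400.5)
# Hypothetical per-model optimized fluxes (PgC/yr) combined with those weights.
flux = bma_average([2.5, 3.1, 3.4], w)
```

    Recomputing the weights at each time and region is what lets them "vary in time and space" as the abstract describes.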

  5. Development of a long wave infrared detector for SGLI instrument

    NASA Astrophysics Data System (ADS)

    Dariel, Aurélien; Chorier, P.; Reeb, N.; Terrier, B.; Vuillermet, M.; Tribolet, P.

    2007-10-01

    The Japanese Aerospace Exploration Agency (JAXA) will be conducting the Global Change Observation Mission (GCOM) for monitoring of global environmental change. SGLI (Second Generation Global Imager) is an optical sensor on board GCOM-C (Climate) that includes a Long Wave IR Detector (LWIRD) sensitive up to about 13 μm. SGLI will provide high-accuracy measurements of the atmosphere (aerosol, cloud ...), the cryosphere (glaciers, snow, sea ice ...), the biomass and the Earth's temperature (sea and land). Sofradir is a major supplier to the space industry, based on the use of a space-qualified MCT technology for detectors from 0.8 to 15 μm. This mature and reproducible technology has been used for 15 years to produce thousands of LWIR detectors with cut-off wavelengths between 9 and 12 μm. NEC Toshiba Space, prime contractor for the Second Generation Global Imager (SGLI), has selected SOFRADIR for its heritage in space projects and Mercury Cadmium Telluride (MCT) detectors to develop the LWIR detector. This detector includes two detection circuits for detection at 10.8 μm and 12.0 μm, hybridized on a single CMOS readout circuit. Each detection circuit is made of 20x2 square pixels of 140 μm. In order to optimize the overall performance, each pixel is made of 5x5 square sub-pixels of 28 μm and the readout circuit enables sub-pixel deselection. The MCT material and the photovoltaic technology are adapted to maximize response for the requested bandwidths: cut-off wavelengths of the 2 detection circuits are 12.6 and 13.4 μm at 55K. This detector is packaged into a sealed housing for full integration into a Dewar at 55K. This paper describes the main technical requirements and the design features of this detector, including trade-offs regarding performance optimization, and presents preliminary electro-optical results.

  6. Neural-network-assisted genetic algorithm applied to silicon clusters

    NASA Astrophysics Data System (ADS)

    Marim, L. R.; Lemes, M. R.; dal Pino, A.

    2003-03-01

    Recently, a new optimization procedure that combines the power of artificial neural networks with the versatility of the genetic algorithm (GA) was introduced. This method, called neural-network-assisted genetic algorithm (NAGA), uses a neural network to restrict the search space and it is expected to speed up the solution of global optimization problems if some previous information is available. In this paper, we have tested NAGA to determine the ground-state geometry of Si_n (10 ⩽ n ⩽ 15) according to a tight-binding total-energy method. Our results indicate that NAGA was able to find the desired global minimum of the potential energy for all the test cases and it was at least ten times faster than the pure genetic algorithm.

  7. Hybrid Power Management Program Continued

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2002-01-01

    Hybrid Power Management (HPM) is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The advanced power devices include ultracapacitors and photovoltaics. HPM has extremely wide potential with applications including power generation, transportation, biotechnology, and space power systems. It may significantly alleviate global energy concerns, improve the environment, and stimulate the economy.

  8. Dynamic positioning configuration and its first-order optimization

    NASA Astrophysics Data System (ADS)

    Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu

    2014-02-01

    Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ the functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (Semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. 
In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It shows that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations, one of which is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations, the second is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It shows that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of the simplex configuration (such as the D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. Additionally, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed. The proposed configuration optimization framework has taken the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron) into account. It shows that the conclusions made by the proposed technique are more general and no longer limited by different sampling schemes. 
By the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples are performed with the GPS constellation to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS, of which the orbital inclination and the orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.

  9. Characteristics and Trade-Offs of Doppler Lidar Global Wind Profiling

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J.; Emmitt, G. David

    2004-01-01

    Accurate, global profiling of wind velocity is highly desired by NASA, NOAA, the DOD/DOC/NASA Integrated Program Office (IPO)/NPOESS, DOD, and others for many applications such as validation and improvement of climate models, and improved weather prediction. The most promising technology to deliver this measurement from space is Doppler Wind Lidar (DWL). The NASA/NOAA Global Tropospheric Wind Sounder (GTWS) program is currently in the process of generating the science requirements for a space-based sensor. In order to optimize the process of defining science requirements, it is important for the scientific and user community to understand the nature of the wind measurements that DWL can make. These measurements are very different from those made by passive imaging sensors or by active radar sensors. The purpose of this paper is to convey the sampling characteristics and data product trade-offs of an orbiting DWL.

  10. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
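    The two-step narrowing described above can be sketched on a toy problem (a minimal illustration, not the paper's SVR pipeline: the quadratic objective below is a hypothetical stand-in for a cross-validation error over two SVR parameters):

```python
import random

def objective(c, g):
    # Hypothetical stand-in for SVR cross-validation error at parameters (c, g);
    # its minimum sits at (3.7, 0.9).
    return (c - 3.7) ** 2 + (g - 0.9) ** 2

# Step 1: a coarse grid traverse narrows the search from a global area
# down to the most promising grid cell.
best = min(((c, g) for c in range(0, 10)
            for g in [i / 10 for i in range(0, 20)]),
           key=lambda p: objective(*p))

# Step 2: PSO refines the parameters inside a local window around the winner.
random.seed(0)
lo = (best[0] - 1, best[1] - 0.1)
hi = (best[0] + 1, best[1] + 0.1)
swarm = [[random.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=lambda p: objective(*p))
for _ in range(100):
    for i, p in enumerate(swarm):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]                       # inertia
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if objective(*p) < objective(*pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=lambda p: objective(*p))

print(gbest)  # near (3.7, 0.9)
```

    The grid step only needs a coarse resolution because its job is to locate the right neighborhood; the swarm then does the fine search in that much smaller space.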

  11. Water Cycle Missions for the Next Decade

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2013-12-01

    The global water cycle describes the circulation of water as a vital and dynamic substance in its liquid, solid, and vapor phases as it moves through the atmosphere, oceans and land. Life in its many forms exists because of water, and modern civilization depends on learning how to live within the constraints imposed by the availability of water. The scientific challenge posed by the need to observe the global water cycle is to integrate in situ and space-borne observations to quantify the key water-cycle state variables and fluxes. The vision to address that challenge is a series of Earth observation missions that will measure the states, stocks, flows, and residence times of water on regional to global scales followed by a series of coordinated missions that will address the processes, on a global scale, that underlie variability and changes in water in all its three phases. The accompanying societal challenge is to foster the improved use of water data and information as a basis for enlightened management of water resources, to protect life and property from effects of extremes in the water cycle. A major change in thinking about water science that goes beyond its physics to include its role in ecosystems and society is also required. Better water-cycle observations, especially on the continental and global scales, will be essential. Water-cycle predictions need to be readily available globally to reduce loss of life and property caused by water-related natural hazards. Building on the 2007 Earth Science Decadal Survey, NASA's Plan for a Climate-Centric Architecture for Earth Observations and Applications from Space , and the 2012 Chapman Conference on Remote Sensing of the Terrestrial Water Cycle, a workshop was held in April 2013 to gather wisdom and determine how to prepare for the next generation of water cycle missions in support of the second Earth Science Decadal Survey. 
This talk will present the outcomes of the workshop, including the intersection between science questions, technology readiness, and satellite design optimization. A series of next-generation water cycle mission working groups were proposed, and white papers, designed to identify capacity gaps and inform NASA, were developed. The workshop identified several visions for the next decade of water cycle satellite observations, and developed a roadmap and action plan for establishing the foundation for these missions. Achieving this outcome will result in optimized community investments and better functionality of these future missions, and will help to foster a broader range of scientists and professionals engaged in water cycle observation planning and development around the country and the world.

  12. Memoryless cooperative graph search based on the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Hou, Jian; Yan, Gang-Feng; Fan, Zhen

    2011-04-01

    We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.

  13. Target selection and comparison of mission design for space debris removal by DLR's advanced study group

    NASA Astrophysics Data System (ADS)

    van der Pas, Niels; Lousada, Joao; Terhes, Claudia; Bernabeu, Marc; Bauer, Waldemar

    2014-09-01

    Space debris is a growing problem. Models show that the Kessler syndrome, the exponential growth of debris due to collisions, has become unavoidable unless an active debris removal program is initiated. The debris population in LEO with inclinations between 60° and 95° is considered the most critical zone. In order to stabilize the debris population in orbit, especially in LEO, 5 to 10 objects will need to be removed every year. The unique circumstances of such a mission could require that several objects be removed with a single launch. This will require a mission to rendezvous with a multitude of objects orbiting at different altitudes, inclinations and planes. Removal models have assumed that the top-priority targets will be removed first. However, this will lead to a suboptimal mission design and increase the ΔV-budget. Since there is a multitude of targets to choose from, the targets can be selected for an optimal mission design. In order to select a group of targets for a removal mission, the orbital parameters and political constraints should also be taken into account. Within this paper a number of the target selection criteria are presented. The possible mission targets and their order of retrieval are dependent on the mission architecture. A comparison between several global mission architectures is given. Under consideration are three global missions, for which a number of parameters are varied. The first mission launches multiple separate deorbit kits. The second launches a mother craft with deorbit kits. The third launches an orbital tug which pulls the debris into a lower orbit, after which a deorbit kit performs the final deorbit burn. A rough-order-of-magnitude (RoM) mass and cost comparison is presented. The research described in this paper has been conducted as part of an active debris removal study by the Advanced Study Group (ASG). 
The ASG is an interdisciplinary student group working at the DLR, analyzing existing technologies and developing new ideas into preliminary concepts.

  14. Towards global optimization with adaptive simulated annealing

    NASA Astrophysics Data System (ADS)

    Forbes, Gregory W.; Jones, Andrew E.

    1991-01-01

    The structure of the simulated annealing algorithm is presented and its rationale is discussed. A unifying heuristic is then introduced which serves as a guide in the design of all of the sub-components of the algorithm. Simply put, this heuristic principle states that at every cycle in the algorithm the occupation density should be kept as close as possible to the equilibrium distribution. This heuristic has been used as a guide to develop novel step-generation and temperature-control methods intended to improve the efficiency of the simulated annealing algorithm. The resulting algorithm has been used in attempts to locate good solutions for one of the lens design problems associated with this conference, viz. the "monochromatic quartet", and a sample of the results is presented. 1. Global optimization in the context of lens design. Whatever the context, optimization algorithms relate to problems that take the following form: given some configuration space with coordinates r = (x1, …, xn) and a merit function written as f(r), find the point r where f(r) takes its lowest value; that is, find the global minimum. In many cases there is also a set of auxiliary constraints that must be met, so the problem statement becomes: find the global minimum of the merit function within the region defined by the equality constraints Ej(r) = 0, j = 1, 2, …, p, together with q inequality constraints required to be ≥ 0.
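    The Metropolis-style acceptance rule and slow cooling at the core of simulated annealing can be sketched as follows (a minimal 1-D illustration under assumed cooling parameters, not the authors' adaptive step-generation or temperature-control methods; the merit function is an arbitrary multimodal example):

```python
import math, random

def f(x):
    # Arbitrary multimodal 1-D merit function with several local minima.
    return (x - 2) ** 2 + 2 * math.sin(5 * x)

random.seed(1)
x = 10.0          # start far from the global minimum
T = 5.0           # initial temperature
best = x
while T > 1e-3:
    for _ in range(50):                              # equilibrate at each T
        cand = x + random.gauss(0, math.sqrt(T))     # T-scaled trial step
        dE = f(cand) - f(x)
        # Metropolis rule: always accept downhill moves, accept uphill
        # moves with probability exp(-dE/T).
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = cand
        if f(x) < f(best):
            best = x
    T *= 0.95                                        # slow geometric cooling
print(best, f(best))
```

    Spending many steps at each temperature before cooling is the simplest way to keep the occupation density close to the equilibrium distribution, which is exactly the heuristic the abstract describes.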

  15. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Lyu, Q; Ruan, D

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes and adjacent aperture shapes were penalized to control MLC motion range. An alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM), lung (LNG), and 2 head and neck cases—one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans had converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs’ capability in dose rate and gantry rotation speed modulation. 
Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.

  16. NASA Space Launch System: A Cornerstone Capability for Exploration

    NASA Technical Reports Server (NTRS)

    Creech, Stephen D.; Robinson, Kimberly F.

    2014-01-01

    Under construction today, the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS), managed at the Marshall Space Flight Center, will provide a robust new capability for human and robotic exploration beyond Earth orbit. The vehicle's initial configuration, scheduled for first launch in 2017, will enable human missions into lunar space and beyond, as well as provide game-changing benefits for space science missions, including offering substantially reduced transit times for conventionally designed spacecraft. From there, the vehicle will undergo a series of block upgrades via an evolutionary development process designed to expedite mission capture as capability increases. The Space Launch System offers multiple benefits for a variety of utilization areas. From a mass-lift perspective, the initial configuration of the vehicle, capable of delivering 70 metric tons (t) to low Earth orbit (LEO), will be the world's most powerful launch vehicle. Optimized for missions beyond Earth orbit, it will also be the world's only exploration-class launch vehicle capable of delivering 25 t to lunar orbit. The evolved configuration, with a capability of 130 t to LEO, will be the most powerful launch vehicle ever flown. From a volume perspective, SLS will be compatible with the payload envelopes of contemporary launch vehicles, but will also offer options for larger fairings with unprecedented volume-lift capability. The vehicle's mass-lift capability also means that it offers extremely high characteristic energy for missions into deep space. This paper will discuss the impacts that these factors - mass-lift, volume, and characteristic energy - have on a variety of mission classes, particularly human exploration and space science. 
It will address the vehicle's capability to enable existing architectures for deep-space exploration, such as those documented in the Global Exploration Roadmap, a capabilities-driven outline for future deep-space voyages created by the International Space Exploration Coordination Group, which represents 14 of the world's space agencies. In addition, this paper will detail this new rocket's capability to support missions beyond the human exploration roadmap, including robotic precursor missions to other worlds or uniquely high-mass space operation facilities in Earth orbit. As this paper will explain, the SLS Program is currently building a global infrastructure asset that will provide robust space launch capability to deliver sustainable solutions for exploration.

  17. NASA's Space Launch System: A Cornerstone Capability for Exploration

    NASA Technical Reports Server (NTRS)

    Creech, Stephen D.

    2014-01-01

    Under construction today, the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS), managed at the Marshall Space Flight Center, will provide a robust new capability for human and robotic exploration beyond Earth orbit. The vehicle's initial configuration, scheduled for first launch in 2017, will enable human missions into lunar space and beyond, as well as provide game-changing benefits for space science missions, including offering substantially reduced transit times for conventionally designed spacecraft. From there, the vehicle will undergo a series of block upgrades via an evolutionary development process designed to expedite mission capture as capability increases. The Space Launch System offers multiple benefits for a variety of utilization areas. From a mass-lift perspective, the initial configuration of the vehicle, capable of delivering 70 metric tons (t) to low Earth orbit (LEO), will be the world's most powerful launch vehicle. Optimized for missions beyond Earth orbit, it will also be the world's only exploration-class launch vehicle capable of delivering 25 t to lunar orbit. The evolved configuration, with a capability of 130 t to LEO, will be the most powerful launch vehicle ever flown. From a volume perspective, SLS will be compatible with the payload envelopes of contemporary launch vehicles, but will also offer options for larger fairings with unprecedented volume-lift capability. The vehicle's mass-lift capability also means that it offers extremely high characteristic energy for missions into deep space. This paper will discuss the impacts that these factors - mass-lift, volume, and characteristic energy - have on a variety of mission classes, particularly human exploration and space science. 
It will address the vehicle's capability to enable existing architectures for deep-space exploration, such as those documented in the Global Exploration Roadmap, a capabilities-driven outline for future deep-space voyages created by the International Space Exploration Coordination Group, which represents 12 of the world's space agencies. In addition, this paper will detail this new rocket's capability to support missions beyond the human exploration roadmap, including robotic precursor missions to other worlds or uniquely high-mass space operation facilities in Earth orbit. As this paper will explain, the SLS Program is currently building a global infrastructure asset that will provide robust space launch capability to deliver sustainable solutions for exploration.

  18. Kinematically Optimal Robust Control of Redundant Manipulators

    NASA Astrophysics Data System (ADS)

    Galicki, M.

    2017-12-01

    This work deals with the problem of robust optimal task-space trajectory tracking subject to finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when tracking the trajectory with the end-effector. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. Numerical simulations carried out for a redundant manipulator of SCARA type, consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers and provide comparisons with other well-known control schemes.

  19. Designing a Unique Single Point Cross Over Method

    NASA Technical Reports Server (NTRS)

    Wilson, Richard Phillip

    2002-01-01

    The idea behind genetic algorithms is to extract the optimization strategies nature uses successfully - known as Darwinian evolution - and transform them for application in mathematical optimization theory to find the global optimum in a defined phase space. One can imagine a population of individual 'explorers' sent into the optimization phase space. Each explorer is defined by its genes; that is, its position inside the phase space is coded in its genes. Every explorer has the duty to determine the quality of its position in the phase space. (Consider the phase space being a number of variables in some technological process; the quality of any position in the phase space - in other words, any set of the variables - can be expressed by the yield of the desired chemical product.) Then the struggle of 'life' begins. The three fundamental principles are selection, mating/crossover, and mutation. Only explorers (= genes) sitting at the best places will reproduce and create a new population. This is performed in the second step (mating/crossover). The 'hope' behind this part of the algorithm is that 'good' sections of two parents will be recombined into yet better-fitting children. In fact, many of the created children will not be successful (as in biological evolution), but a few children will indeed fulfill this hope. These good sections are named in some publications as building blocks. Now a problem appears: repeating these steps, no new area would be explored. The two former steps would only exploit the already known regions of the phase space, which could lead to premature convergence of the algorithm, with the consequence of missing the global optimum by exploiting some local optimum. The third step, mutation, ensures the necessary accidental effects. One can imagine the new population being mixed up a little to bring some new information into this set of genes. 
Whereas in biology a gene is described as a macro-molecule with four different bases to code the genetic information, a gene in genetic algorithms is usually defined as a bitstring (a sequence of b 1's and 0's).
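    The single-point crossover and mutation operators described above can be sketched directly on such bitstrings (a minimal illustration; the parent strings and mutation rate below are arbitrary choices for demonstration):

```python
import random

random.seed(0)

def crossover(mom, dad):
    # Single-point crossover: cut both bitstrings at one random locus
    # and swap the tails, hoping to recombine good building blocks.
    cut = random.randint(1, len(mom) - 1)
    return mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]

def mutate(gene, rate=0.05):
    # Flip each bit with a small probability to keep exploring new
    # regions of the phase space and avoid premature convergence.
    return ''.join(b if random.random() > rate else '10'[int(b)] for b in gene)

child1, child2 = crossover('11111111', '00000000')
print(child1, child2)   # complementary strings, split at the same cut point
print(mutate(child1))
```

    Because the cut point is shared, each child inherits one contiguous section from each parent, which is what lets intact building blocks survive the recombination.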

  20. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as particle inertia dims. We propose a Competitive PSO (CPSO) that helps particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
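    The diversity-restoring idea can be sketched as follows (a simplified stand-in for the authors' CPSO, not their exact competition rule: here the worst particles are simply re-seeded at intervals, and the 1-D Rastrigin-like function is a hypothetical substitute for a traveltime misfit):

```python
import math, random

def misfit(x):
    # 1-D Rastrigin-like toy misfit with many local minima; global minimum at 0.
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

random.seed(2)
n = 30
pos = [random.uniform(-5, 5) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]
gbest = min(pbest, key=misfit)
for it in range(200):
    w = 0.9 - 0.5 * it / 200           # inertia dims over the iterations
    for i in range(n):
        vel[i] = (w * vel[i]
                  + 1.5 * random.random() * (pbest[i] - pos[i])
                  + 1.5 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if misfit(pos[i]) < misfit(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=misfit)
    # Simplified "competition": periodically re-seed the worst particles so
    # the swarm keeps some diversity even after inertia has dimmed.
    if it % 20 == 19:
        for i in sorted(range(n), key=lambda k: misfit(pbest[k]))[-5:]:
            pos[i] = random.uniform(-5, 5)
            vel[i] = 0.0
print(gbest, misfit(gbest))
```

    Keeping all models visited across several such runs is what allows the model space to be sampled for uncertainty quantification rather than just minimized.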

  1. Space-Group Symmetries Generate Chaotic Fluid Advection in Crystalline Granular Media

    NASA Astrophysics Data System (ADS)

    Turuban, R.; Lester, D. R.; Le Borgne, T.; Méheust, Y.

    2018-01-01

    The classical connection between symmetry breaking and the onset of chaos in dynamical systems harks back to the seminal theory of Noether [Transp. Theory Statist. Phys. 1, 186 (1918), 10.1080/00411457108231446]. We study the Lagrangian kinematics of steady 3D Stokes flow through simple cubic and body-centered cubic (bcc) crystalline lattices of close-packed spheres, and uncover an important exception. While breaking of point-group symmetries is a necessary condition for chaotic mixing in both lattices, a further space-group (glide) symmetry of the bcc lattice generates a transition from globally regular to globally chaotic dynamics. This finding provides new insights into chaotic mixing in porous media and has significant implications for understanding the impact of symmetries upon generic dynamical systems.

  2. A Novel Hybrid Firefly Algorithm for Global Optimization.

    PubMed

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.
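    The parallel-population idea can be sketched as follows (a minimal illustration under assumed coefficients, not the authors' exact HFA: the two populations here exchange only their current best member each generation, and the sphere function is a stand-in unimodal benchmark):

```python
import math, random

def sphere(x):
    # Unimodal benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

random.seed(3)
DIM, NP, GENS = 5, 20, 150
lo, hi = -5.0, 5.0
fa_pop = [[random.uniform(lo, hi) for _ in range(DIM)] for _ in range(NP)]
de_pop = [[random.uniform(lo, hi) for _ in range(DIM)] for _ in range(NP)]

for _ in range(GENS):
    # Firefly step: each firefly moves toward every brighter (better) one,
    # with attractiveness decaying with distance, plus a small random jitter.
    for i in range(NP):
        for j in range(NP):
            if sphere(fa_pop[j]) < sphere(fa_pop[i]):
                r2 = sum((a - b) ** 2 for a, b in zip(fa_pop[i], fa_pop[j]))
                beta = 0.9 * math.exp(-1.0 * r2)
                fa_pop[i] = [a + beta * (b - a) + 0.05 * random.gauss(0, 1)
                             for a, b in zip(fa_pop[i], fa_pop[j])]
    # DE/rand/1/bin step: mutate with a scaled difference vector, then select.
    for i in range(NP):
        a, b, c = random.sample([p for k, p in enumerate(de_pop) if k != i], 3)
        trial = [a[d] + 0.5 * (b[d] - c[d]) if random.random() < 0.9
                 else de_pop[i][d] for d in range(DIM)]
        if sphere(trial) < sphere(de_pop[i]):
            de_pop[i] = trial
    # Information sharing: the two populations swap their current best,
    # each replacing its own worst member.
    fb = min(fa_pop, key=sphere)
    db = min(de_pop, key=sphere)
    fa_pop[fa_pop.index(max(fa_pop, key=sphere))] = db[:]
    de_pop[de_pop.index(max(de_pop, key=sphere))] = fb[:]

best = min(fa_pop + de_pop, key=sphere)
print(sphere(best))
```

    Running the two searches side by side means a stagnating population can be pulled along by the other's progress, which is the information-sharing benefit the abstract highlights.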

  3. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869

  4. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  5. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  6. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE PAGES

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; ...

    2018-05-29

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  7. A new governance space for health

    PubMed Central

    Kickbusch, Ilona; Szabo, Martina Marianna Cassar

    2014-01-01

    Global health refers to ‘those health issues which transcend national boundaries and governments and call for actions on the global forces and global flows that determine the health of people’. (Kickbusch 2006) Governance in this trans-national and cross-cutting arena can be analyzed along three political spaces: global health governance, global governance for health, and governance for global health. It is argued that the management of the interface between these three political spaces of governance in the global public health domain is becoming increasingly important in order to move the global health agenda forward. Global health governance refers mainly to those institutions and processes of governance which are related to an explicit health mandate, such as the World Health Organization; global governance for health refers mainly to those institutions and processes of global governance which have a direct and indirect health impact, such as the United Nations, World Trade Organization or the Human Rights Council; governance for global health refers to the institutions and mechanisms established at the national and regional level to contribute to global health governance and/or to governance for global health – such as national global health strategies or regional strategies for global health. It can also refer to club strategies, such as agreements by a group of countries such as the BRICS. In all three political spaces, the involvement of a multitude of state and non-state actors has become the norm – that is why issues of legitimacy, accountability and transparency have moved to the fore. The transnational nature of global health will require the engagement of all actors to produce global public goods for health (GPGH) and to ensure a rules-based and reliably financed global public health domain. PMID:24560259

  8. A new governance space for health.

    PubMed

    Kickbusch, Ilona; Szabo, Martina Marianna Cassar

    2014-01-01

    Global health refers to 'those health issues which transcend national boundaries and governments and call for actions on the global forces and global flows that determine the health of people'. (Kickbusch 2006) Governance in this trans-national and cross-cutting arena can be analyzed along three political spaces: global health governance, global governance for health, and governance for global health. It is argued that the management of the interface between these three political spaces of governance in the global public health domain is becoming increasingly important in order to move the global health agenda forward. Global health governance refers mainly to those institutions and processes of governance which are related to an explicit health mandate, such as the World Health Organization; global governance for health refers mainly to those institutions and processes of global governance which have a direct and indirect health impact, such as the United Nations, World Trade Organization or the Human Rights Council; governance for global health refers to the institutions and mechanisms established at the national and regional level to contribute to global health governance and/or to governance for global health--such as national global health strategies or regional strategies for global health. It can also refer to club strategies, such as agreements by a group of countries such as the BRICS. In all three political spaces, the involvement of a multitude of state and non-state actors has become the norm--that is why issues of legitimacy, accountability and transparency have moved to the fore. The transnational nature of global health will require the engagement of all actors to produce global public goods for health (GPGH) and to ensure a rules-based and reliably financed global public health domain.

  9. Group-level variations in motor representation areas of thenar and anterior tibial muscles: Navigated Transcranial Magnetic Stimulation Study.

    PubMed

    Niskanen, Eini; Julkunen, Petro; Säisänen, Laura; Vanninen, Ritva; Karjalainen, Pasi; Könönen, Mervi

    2010-08-01

    Navigated transcranial magnetic stimulation (TMS) can be used to stimulate functional cortical areas at precise anatomical locations to induce measurable responses. The stimulation has commonly been focused on anatomically predefined motor areas: TMS of that area elicits a measurable muscle response, the motor evoked potential. In clinical pathologies, however, the well-known homunculus somatotopy theory may not be straightforward, and the representation area of the muscle is not fixed. Traditionally, the anatomical locations of TMS stimulations have not been reported at the group level in standard space. This study describes a methodology for group-level analysis by investigating the normal representation areas of the thenar and anterior tibial muscles in the primary motor cortex. The optimal representation area for these muscles was mapped in 59 healthy right-handed subjects using navigated TMS. The coordinates of the optimal stimulation sites were then normalized into standard space to determine the representation areas of these muscles at the group level in healthy subjects. Furthermore, 95% confidence interval ellipsoids were fitted into the optimal stimulation site clusters to define the variation between subjects in optimal stimulation sites. The variation was found to be highest in the anteroposterior direction along the superior margin of the precentral gyrus. These results provide important normative information for clinical studies assessing changes in the functional cortical areas because of plasticity of the brain. Furthermore, it is proposed that the presented methodology to study TMS locations at the group level in standard space will be a suitable tool for research purposes in population studies. © 2010 Wiley-Liss, Inc.

  10. Integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades using multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1995-01-01

    This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.

  11. Optimal group size in a highly social mammal

    PubMed Central

    Markham, A. Catherine; Gesquiere, Laurence R.; Alberts, Susan C.; Altmann, Jeanne

    2015-01-01

    Group size is an important trait of social animals, affecting how individuals allocate time and use space, and influencing both an individual’s fitness and the collective, cooperative behaviors of the group as a whole. Here we tested predictions motivated by the ecological constraints model of group size, examining the effects of group size on ranging patterns and adult female glucocorticoid (stress hormone) concentrations in five social groups of wild baboons (Papio cynocephalus) over an 11-y period. Strikingly, we found evidence that intermediate-sized groups have energetically optimal space-use strategies; both large and small groups experience ranging disadvantages, in contrast to the commonly reported positive linear relationship between group size and home range area and daily travel distance, which depict a disadvantage only in large groups. Specifically, we observed a U-shaped relationship between group size and home range area, average daily distance traveled, evenness of space use within the home range, and glucocorticoid concentrations. We propose that a likely explanation for these U-shaped patterns is that large, socially dominant groups are constrained by within-group competition, whereas small, socially subordinate groups are constrained by between-group competition and predation pressures. Overall, our results provide testable hypotheses for evaluating group-size constraints in other group-living species, in which the costs of intra- and intergroup competition vary as a function of group size. PMID:26504236

  12. Global Search Capabilities of Indirect Methods for Impulsive Transfers

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong

    2015-09-01

    An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.

  13. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results are presented to demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model, and control design are all optimized simultaneously. Parallel research done by other researchers is reviewed. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology should be incorporated into computer-aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization.

  14. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    Knowledge of the appropriate values of the parameters of a genetic algorithm (GA), such as the population size, the shrunk search space containing the solution, and the crossover and mutation probabilities, is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
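    The idea of shrinking the search space by n-dimensional bisection, with the box diagonal serving as an error bound, can be loosely sketched as follows. This is an assumption-laden simplification, not the authors' scheme: `shrink_box`, the sampling rule, and every parameter are invented for illustration.

```python
import random

def shrink_box(f, lo, hi, rounds=20, samples=64, seed=0):
    """Loose sketch of sampling-based n-dimensional bisection: each round,
    sample the current box uniformly, then for every coordinate keep the
    half-interval containing the best sample. Returns the shrunken box and
    its diagonal as a crude error bound on the optimizer's location."""
    rng = random.Random(seed)
    lo, hi = list(lo), list(hi)
    for _ in range(rounds):
        pts = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(samples)]
        best = min(pts, key=f)
        for j in range(len(lo)):
            mid = 0.5 * (lo[j] + hi[j])
            if best[j] < mid:
                hi[j] = mid          # best sample lies in the lower half
            else:
                lo[j] = mid          # best sample lies in the upper half
    err = sum((h - l) ** 2 for l, h in zip(lo, hi)) ** 0.5
    return lo, hi, err

# hypothetical smooth objective with its minimum at (0.7, 0.7, 0.7)
f = lambda x: sum((xi - 0.7) ** 2 for xi in x)
lo, hi, err = shrink_box(f, [-2.0] * 3, [2.0] * 3)
center = [0.5 * (l + h) for l, h in zip(lo, hi)]
```

    The resulting box could then seed a GA, which is the role the preprocessing plays in the abstract.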

  15. Emergence of a few distinct structures from a single formal structure type during high-throughput screening for stable compounds: The case of RbCuS and RbCuSe

    NASA Astrophysics Data System (ADS)

    Trimarchi, Giancarlo; Zhang, Xiuwen; DeVries Vermeer, Michael J.; Cantwell, Jacqueline; Poeppelmeier, Kenneth R.; Zunger, Alex

    2015-10-01

    Theoretical sorting of stable and synthesizable "missing compounds" from those that are unstable is a crucial step in the discovery of previously unknown functional materials. This active research area often involves high-throughput (HT) examination of the total energy of a given compound in a list of candidate formal structure types (FSTs), searching for those with the lowest energy within that list. While it is well appreciated that local relaxation methods based on a fixed list of structure types can lead to inaccurate geometries, this approach is widely used in HT studies because it produces answers faster than global optimization methods (which vary lattice vectors and atomic positions without local restrictions). We find, however, a different failure mode of the HT protocol: specific crystallographic classes of formal structure types each correspond to a series of chemically distinct "daughter structure types" (DSTs) that have the same space group but possess totally different local bonding configurations, including coordination types. Failure to include such DSTs in the fixed list of examined candidate structures used in contemporary high-throughput approaches can lead to qualitative misidentification of the stable bonding pattern, not just quantitative inaccuracies. In this work, we (i) clarify the understanding of the general DST-FST relationship, thus improving current discovery HT approaches, (ii) illustrate this failure mode for RbCuS and RbCuSe (the latter a previously unreported compound, predicted here) by developing a synthesis method and accelerated crystal-structure determination, and (iii) apply the genetic-algorithm-based global space-group optimization (GSGO) approach, which is not vulnerable to the failure mode of HT searches over fixed lists, demonstrating a correct identification of the stable DST. The broad impact of items (i)-(iii) lies in the demonstrated predictive ability of a more comprehensive search strategy than the one currently used: HT calculations as the preliminary broad screening, followed by unbiased GSGO of the final candidates.

  16. The Global Space Geodesy Network and the Essential Role of Latin America Sites

    NASA Astrophysics Data System (ADS)

    Pearlman, M. R.; Ma, C.; Neilan, R.; Noll, C. E.; Pavlis, E. C.; Wetzel, S.

    2013-05-01

    The improvements in the reference frame and other space geodesy data products spelled out in the GGOS 2020 plan will evolve over time as new space geodesy sites enhance the global distribution of the network and new technologies are implemented at current and new sites, thus enabling improved data processing and analysis. The goal of 30 globally distributed core sites with VLBI, SLR, GNSS, and DORIS (where available) will take time to materialize. Co-location sites with less than the full core complement will continue to play a very important role in filling out the network while it is evolving and even after full implementation. GGOS, through its Call for Participation, bi-lateral and multi-lateral discussions, and work through the scientific Services, has been encouraging current groups to upgrade and new groups to join the activity. This talk will give an update on the current expansion of the global network and the projection for the network configuration that we forecast over the next 10 years, based on discussions and planning that have already occurred. We will also discuss some of the historical contributions to the reference frame from sites in Latin America and the need for new sites in the future.

  17. Two-spoke placement optimization under explicit specific absorption rate and power constraints in parallel transmission at ultra-high field.

    PubMed

    Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas

    2015-06-01

    The spokes method combined with parallel transmission is a promising technique to mitigate the B1+ inhomogeneity at ultra-high field in 2D imaging. To date, however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by making the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and performs optimizations, in parallel for all of them, of the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2*-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. The CEOS-Land Surface Imaging Constellation Portal for GEOSS: A resource for land surface imaging system information and data access

    USGS Publications Warehouse

    Holm, Thomas; Gallo, Kevin P.; Bailey, Bryan

    2010-01-01

    The Committee on Earth Observation Satellites (CEOS) is an international group that coordinates civil space-borne observations of the Earth, and provides the space component of the Global Earth Observing System of Systems (GEOSS). The CEOS Virtual Constellations concept was implemented in an effort to engage and coordinate disparate Earth observing programs of CEOS member agencies and ultimately facilitate their contribution in supplying the space-based observations required to satisfy the requirements of the GEOSS. The CEOS initially established Study Teams for four prototype constellations that included precipitation, land surface imaging, ocean surface topography, and atmospheric composition. The basic mission of the Land Surface Imaging (LSI) Constellation [1] is to promote the efficient, effective, and comprehensive collection, distribution, and application of space-acquired image data of the global land surface, especially to meet societal needs of the global population, such as those addressed by the nine Group on Earth Observations (GEO) Societal Benefit Areas (SBAs) of agriculture, biodiversity, climate, disasters, ecosystems, energy, health, water, and weather. The LSI Constellation Portal is the result of an effort to address important goals within the LSI Constellation mission and provide resources to assist in planning for future space missions that might further contribute to meeting those goals.

  19. Parameter Estimation of Fractional-Order Chaotic Systems by Using Quantum Parallel Particle Swarm Optimization Algorithm

    PubMed Central

    Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng

    2015-01-01

    Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic exponentially increases the computation performed in each generation. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158

  20. Fluid management in space construction

    NASA Technical Reports Server (NTRS)

    Snyder, Howard

    1989-01-01

    The low-g fluids management group within the Center for Space Construction is engaged in active research on the following topics: gauging; venting; controlling contamination; sloshing; transfer; acquisition; and two-phase flow. Our basic understanding of each of these topics is at present inadequate to design space structures optimally. A brief report is presented on each topic showing the present status, recent accomplishments by our group, and our plans for future research. Reports are presented in graphic and outline form.

  1. Gradient descent for robust kernel-based regression

    NASA Astrophysics Data System (ADS)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
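    The setting above, functional gradient descent in an RKHS under a windowed robust loss with early stopping as the regularizer, can be illustrated with a Gaussian kernel and a Welsch-type window. The choices of G, σ, step size, and step count below are illustrative assumptions, not the paper's.

```python
import math

def kernel_gd(X, y, sigma=3.0, gamma=0.5, eta=0.5, steps=200):
    """Sketch of gradient descent in an RKHS (Gaussian kernel) under the
    Welsch-type robust loss L(r) = (sigma^2/2) * (1 - exp(-r^2/sigma^2));
    a fixed small step count plays the role of early stopping."""
    n = len(X)
    K = [[math.exp(-gamma * (xi - xj) ** 2) for xj in X] for xi in X]
    alpha = [0.0] * n
    for _ in range(steps):
        pred = [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]
        # L'(r) = r * exp(-r^2/sigma^2): behaves like least squares for small
        # residuals, while large (outlier) residuals are exponentially damped
        lprime = [(pred[i] - y[i]) *
                  math.exp(-((pred[i] - y[i]) ** 2) / sigma ** 2) for i in range(n)]
        # functional gradient step: f <- f - (eta/n) * sum_i L'(r_i) K(x_i, .)
        alpha = [alpha[i] - eta * lprime[i] / n for i in range(n)]
    return lambda t: sum(alpha[j] * math.exp(-gamma * (t - X[j]) ** 2)
                         for j in range(n))

X = [i * 0.1 for i in range(21)]   # regression of y = x on [0, 2]
y = list(X)
y[10] = 10.0                       # one gross outlier at x = 1
f_hat = kernel_gd(X, y)
```

    The outlier's residual is so heavily downweighted by the window that the fitted function stays close to the clean trend, which is the robustness role σ plays in the analysis.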

  2. A theoretical investigation on optimal structures of ethane clusters (C2H6)n with n ≤ 25 and their building-up principle.

    PubMed

    Takeuchi, Hiroshi

    2011-05-01

    Geometry optimization of ethane clusters (C2H6)n in the range of n ≤ 25 is carried out with a Morse potential. A heuristic method based on perturbations of geometries is used to locate global minima of the clusters. The following perturbations are carried out: (1) the molecule or group with the highest energy is moved to the interior of a cluster, (2) it is moved to stable positions on the surface of a cluster, and (3) orientations of one and two molecules are randomly modified. The geometry obtained after each perturbation is optimized by a quasi-Newton method. The global minimum of the dimer is consistent with that previously reported. The putative global minima of the clusters with 3 ≤ n ≤ 25 are first proposed and their building-up principle is discussed. Copyright © 2010 Wiley Periodicals, Inc.
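    The perturb-then-reoptimize loop described above is basin-hopping-like and can be sketched on a one-dimensional tilted double well. A finite-difference gradient descent stands in for the paper's quasi-Newton step, a random displacement stands in for the molecular moves, and every constant is illustrative.

```python
import random

def perturb_and_optimize(f, x0, hops=60, sigma=0.8, seed=5):
    """Basin-hopping-style sketch: randomly displace the current minimum,
    locally re-minimize, and keep the better of the two minima."""
    rng = random.Random(seed)

    def local_min(x, iters=200, lr=0.05, h=1e-5):
        # crude finite-difference gradient descent as the local optimizer
        x = x[:]
        for _ in range(iters):
            g = []
            for j in range(len(x)):
                e = x[:]
                e[j] += h
                g.append((f(e) - f(x)) / h)
            # clamp iterates to a sampling box for numerical safety
            x = [max(-2.5, min(2.5, xj - lr * gj)) for xj, gj in zip(x, g)]
        return x

    best = local_min(x0)
    for _ in range(hops):
        cand = local_min([xj + rng.gauss(0, sigma) for xj in best])
        if f(cand) < f(best):
            best = cand
    return best

# tilted double well: local minimum near x = +1, global minimum near x = -1.04
f = lambda x: (x[0] ** 2 - 1) ** 2 + 0.3 * x[0]
best = perturb_and_optimize(f, [0.9])
```

    Starting near the wrong well, the random hops eventually cross the barrier and the loop settles into the global minimum.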

  3. Quantum centipedes with strong global constraint

    NASA Astrophysics Data System (ADS)

    Grange, Pascal

    2017-06-01

    A centipede made of N quantum walkers on a one-dimensional lattice is considered. The distance between two consecutive legs is either one or two lattice spacings, and a global constraint is imposed: the maximal distance between the first and last leg is N + 1. This is the strongest global constraint compatible with walking. For an initial value of the wave function corresponding to a localized configuration at the origin, the probability law of the first leg of the centipede can be expressed in closed form in terms of Bessel functions. The dispersion relation and the group velocities are worked out exactly. The maximal group velocity goes to zero when N goes to infinity, which is in contrast with the behaviour of group velocities of quantum centipedes without a global constraint, which were recently shown by Krapivsky, Luck and Mallick to give rise to ballistic spreading of the extremal wave-front at non-zero velocity in the large-N limit. The corresponding Hamiltonians are implemented numerically, based on a block structure of the space of configurations corresponding to compositions of the integer N. The growth of the maximal group velocity when the strong constraint is gradually relaxed is explored, and observed to be linear in the density of gaps allowed in the configurations. Heuristic arguments are presented to infer that the large-N limit of the globally constrained model can yield finite group velocities provided the allowed number of gaps is a finite fraction of N.

  4. A quasi-global approach to improve day-time satellite surface soil moisture anomalies through land surface temperature input

    USDA-ARS?s Scientific Manuscript database

    Passive microwave observations from various space borne sensors have been linked to soil moisture of the Earth’s surface layer. The new generation passive microwave sensors are dedicated to retrieving this variable and make observations in the single, theoretically optimal L-band frequency (1-2 GHz)...

  5. Air and Space Power Journal. Volume 23, Number 2, Summer 2009

    DTIC Science & Technology

    2009-01-01

    center’s Global Threat Analysis Group (NASIC/GTG), whose mission is to deliver predictive intelligence on global integrated capabilities across the air...space, and information domains. GTG analysts are charged with synthesizing intelligence data and other intelligence assessments from across...theses and, occasionally, PhD dissertations, and sometimes more so. In some cases, the breadth and depth required...In the GTG, we challenge our

  6. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches enable efficient simulation and optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of the sample points of the variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
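    For context on the maximin criterion the nested designs optimize, a naive baseline is easy to sketch: generate random Latin hypercube designs and keep the one with the largest minimum pairwise distance. This is not the authors' successive-local-enumeration or harmony-search construction; the function name and parameters are illustrative.

```python
import random

def maximin_lhs(n, dim, tries=200, seed=2):
    """Naive maximin Latin hypercube design: draw many random LHS designs
    and keep the one whose minimum pairwise distance is largest."""
    rng = random.Random(seed)

    def lhs():
        # one point per 1/n stratum in every coordinate (the LHS property)
        cols = []
        for _ in range(dim):
            perm = list(range(n))
            rng.shuffle(perm)
            cols.append([(p + rng.random()) / n for p in perm])
        return list(zip(*cols))

    def min_dist(pts):
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for i, p in enumerate(pts) for q in pts[i + 1:])

    best = lhs()
    for _ in range(tries):
        cand = lhs()
        if min_dist(cand) > min_dist(best):
            best = cand
    return best

design = maximin_lhs(n=10, dim=2)
```

    The nested construction in the article additionally forces the low-fidelity design to contain the high-fidelity one, which random restarts alone do not provide.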

  7. On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

    PubMed Central

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    The linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the shortcoming of premature convergence in solving complex (multipeak) optimization problems, due to a lack of enough momentum for particles to perform exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
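    A minimal LDIW-PSO sketch follows, assuming a linearly decreasing inertia weight and velocity limits set as a fraction of the search-space range (the quantity the paper tunes experimentally). All constants are illustrative, not the paper's tuned values.

```python
import random

def ldiw_pso(f, dim, lo, hi, n_particles=30, iters=200, w_start=0.9, w_end=0.4,
             c1=2.0, c2=2.0, v_frac=0.5, seed=3):
    """LDIW-PSO: the inertia weight w decreases linearly from w_start to
    w_end over the run, and velocities are clamped to v_frac * (hi - lo)."""
    rng = random.Random(seed)
    vmax = v_frac * (hi - lo)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal bests
    g = min(P, key=f)[:]                   # global best
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))      # velocity limit
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))  # stay in bounds
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
best = ldiw_pso(sphere, dim=5, lo=-10.0, hi=10.0)
```

    Early in the run the large inertia weight favors exploration; as w shrinks, particles increasingly exploit the neighborhood of the global best, which is exactly the balance the LDIW strategy is meant to control.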

  8. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large-scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected.
NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
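    The convex formulation described, a fluence-map least-squares term plus a group sparsity penalty minimized with FISTA, can be sketched on a toy problem. The dose matrix below is random, the beam/group sizes are tiny, and `fista_group` is an illustrative implementation, not the authors' planning code.

```python
import math, random

def group_prox(x, groups, t):
    """Block soft-thresholding, the proximal operator of t * sum_g ||x_g||_2:
    whole groups whose l2 norm falls below t are zeroed (beams switched off)."""
    out = x[:]
    for g in groups:
        nrm = math.sqrt(sum(x[j] ** 2 for j in g))
        scale = 0.0 if nrm <= t else 1.0 - t / nrm
        for j in g:
            out[j] = scale * x[j]
    return out

def fista_group(A, d, groups, lam=1.0, iters=300):
    """FISTA on 0.5 * ||A x - d||^2 + lam * sum_g ||x_g||_2."""
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row)  # Frobenius bound on the Lipschitz constant
    x, z, tk = [0.0] * n, [0.0] * n, 1.0
    for _ in range(iters):
        r = [sum(A[i][j] * z[j] for j in range(n)) - d[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x_new = group_prox([z[j] - grad[j] / L for j in range(n)], groups, lam / L)
        t_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * tk * tk))
        z = [x_new[j] + (tk - 1.0) / t_new * (x_new[j] - x[j]) for j in range(n)]
        x, tk = x_new, t_new  # momentum step accelerates plain proximal gradient
    return x

rng = random.Random(4)
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]  # 4 candidate "beams"
A = [[rng.uniform(-1, 1) for _ in range(12)] for _ in range(8)]
x_true = [3.0, 2.0, 1.0] + [0.0] * 9                     # only beam 0 used
d = [sum(A[i][j] * x_true[j] for j in range(12)) for i in range(8)]
x_hat = fista_group(A, d, groups)
```

    Because the penalty acts on whole groups, entire candidate beams are driven toward zero, which is how the single convex solve performs beam selection and fluence optimization simultaneously.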

  9. Machine intelligence and robotics: Report of the NASA study group. Executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A brief overview of applications of machine intelligence and robotics in the space program is given. These applications include space exploration robots; global service robots that collect data for public service use on soil conditions, sea states, global crop conditions, weather, geology, disasters, etc., from Earth orbit; space industrialization and processing technologies; and the construction of large structures in space. Program options for research, advanced development, and implementation of machine intelligence and robot technology for use in program planning are discussed. A vigorous and long-range program to incorporate and keep pace with state-of-the-art developments in computer technology, in both spaceborne and ground-based computer systems, is recommended.

  10. Fast-match on particle swarm optimization with variant system mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuang; Fang, Xin; Chen, Jie

    2018-03-01

    Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match the target with maximum similarity without knowing the target pose. It relies on minimizing the Sum-of-Absolute-Differences (SAD) error to obtain the best affine transformation. The algorithm is widely used in image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation with the Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random speed and flows throughout the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant system mechanism on this basis. The benefit is that we avoid matching against a huge number of potential transformations and falling into a local optimum, so that a few transformations suffice to approximate the optimal solution. The experimental results show that our method achieves faster speed and higher accuracy with a smaller affine transformation space.

  11. The design and networking of dynamic satellite constellations for global mobile communication systems

    NASA Technical Reports Server (NTRS)

    Cullen, Cionaith J.; Benedicto, Xavier; Tafazolli, Rahim; Evans, Barry

    1993-01-01

    Various design factors for mobile satellite systems, whose aim is to provide worldwide voice and data communications to users with hand-held terminals, are examined. Two network segments are identified - the ground segment (GS) and the space segment (SS) - and are seen to be highly dependent on each other. The overall architecture must therefore be adapted to both of these segments, rather than each being optimized according to its own criteria. Terrestrial networks are grouped together and called the terrestrial segment (TS). In the SS, the constellation altitude is of fundamental importance: it affects decisions such as the choice of constellation design and network aspects such as call-handover statistics. Orbit resonance is introduced and referred to throughout; it is specifically examined for its useful properties relating to GS/SS connectivities.

  12. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    NASA Astrophysics Data System (ADS)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small base stations (SBs) equipped with caching units have the potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours and serve them to the edge at peak periods. To prefetch intelligently, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, and the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allows for a simple yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
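    The tabular Q-learning update underlying such a caching policy can be illustrated generically. The two-state "popularity" toy problem below, its rewards, and all parameter values are invented for illustration and are not the paper's model:

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000, horizon=20,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: learn Q(s, a) from sampled transitions only,
    without knowing the transition probabilities."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(horizon):
            # Epsilon-greedy action choice balances exploration/exploitation.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r = step(s, a, rng)
            # Bootstrap from the greedy value of the next state.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Hypothetical 2-state toy (not the paper's model): state 1 means "file is
# currently popular"; action 1 (cache it) pays off only while it is popular,
# action 0 (skip) earns a small constant saving; popularity flips w.p. 0.2.
def step(s, a, rng):
    r = 1.0 if (a == 1 and s == 1) else (0.3 if a == 0 else 0.0)
    s2 = (1 - s) if rng.random() < 0.2 else s
    return s2, r

Q = q_learning(2, 2, step)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
```

    The learned greedy policy caches only in the "popular" state, exactly the behavior the rewards encode; the paper's contribution is scaling this idea up with linear function approximation.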

  13. Accuracy limit of rigid 3-point water models

    NASA Astrophysics Data System (ADS)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are the most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPC/E) in reproducing a comprehensive set of liquid bulk properties over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water, a characteristic dependence of the hydration free energy on the sign of the solute charge, in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3P-FB and H2O-DC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both parameter space and accuracy of bulk properties. Thus, we argue that the accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  14. The Space Weather Modeling Framework (SWMF): Models and Validation

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Toth, Gabor; Sokolov, Igor; de Zeeuw, Darren; van der Holst, Bart; Ridley, Aaron; Manchester, Ward, IV

    In the last decade our group at the Center for Space Environment Modeling (CSEM) has developed the Space Weather Modeling Framework (SWMF), which efficiently couples together different models describing the interacting regions of the space environment. Many of these domain models (such as the global solar corona, the inner heliosphere, or the global magnetosphere) are based on MHD and are represented by our multiphysics code, BATS-R-US. SWMF is a powerful tool for coupling regional models describing the space environment from the solar photosphere to the bottom of the ionosphere. Presently, SWMF contains over a dozen components: the solar corona (SC), eruptive event generator (EE), inner heliosphere (IH), outer heliosphere (OH), solar energetic particles (SE), global magnetosphere (GM), inner magnetosphere (IM), radiation belts (RB), plasmasphere (PS), ionospheric electrodynamics (IE), polar wind (PW), upper atmosphere (UA), and lower atmosphere (LA). This talk will present an overview of SWMF, new results obtained with improved physics, as well as some validation studies.

  15. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When such models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This makes optimization hard to predict: most numerical optimization methods are stochastic, and convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary optimization duration becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. The task is complex owing to the variety of optimization methods and software tools and to the nonlinearity of models in different parameter spaces. The software tool ConvAn analyzes the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization-method parameters, and number of adjustable model parameters. Convergence curves can be normalized automatically so that different methods and models can be compared on the same scale. With the help of ConvAn's biochemistry-adapted graphical user interface, different optimization methods can be compared in terms of their ability to find the global optimum, or values close to it, and the computational time needed to reach it. Optimization performance can be estimated for different numbers of adjustable parameters, and the necessary optimization time can be assessed statistically as a function of the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected for poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
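    The curve normalization the abstract describes can be sketched as a simple min-max rescaling; the exact normalization ConvAn applies may differ, and the two runs below are hypothetical data:

```python
def normalize_curves(curves):
    """Min-max rescale a set of convergence curves to [0, 1] so runs of
    different methods or models can be compared on a common axis:
    0 = best objective value seen in any run, 1 = worst (minimization)."""
    lo = min(min(c) for c in curves)
    hi = max(max(c) for c in curves)
    span = (hi - lo) or 1.0   # guard against identical-valued curves
    return [[(v - lo) / span for v in c] for c in curves]

# Two hypothetical runs, objective recorded at the same evaluation counts.
runs = [[100.0, 40.0, 12.0, 5.0], [90.0, 70.0, 30.0, 10.0]]
norm = normalize_curves(runs)
```

    After rescaling, the first run's final value maps to 0.0 and its initial value to 1.0, so the two methods' convergence speeds can be read off a single dimensionless plot.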

  16. GEO Carbon and GHG Initiative Task 3: Optimizing in-situ measurements of essential carbon cycle variables across observational networks

    NASA Astrophysics Data System (ADS)

    Durden, D.; Muraoka, H.; Scholes, R. J.; Kim, D. G.; Loescher, H. W.; Bombelli, A.

    2017-12-01

    The development of an integrated global carbon cycle observation system to monitor changes in the carbon cycle, and ultimately the climate system, across the globe is of crucial importance in the 21st century. This system should be composed of space- and ground-based observations, in concert with modelling and analysis, to produce more robust budgets of carbon and other greenhouse gases (GHGs). A global initiative, the GEO Carbon and GHG Initiative, is working within the framework of the Group on Earth Observations (GEO) to promote interoperability and provide integration across different parts of the system, particularly at domain interfaces, thereby optimizing the efforts of existing networks and initiatives to reduce uncertainties in budgets of carbon and other GHGs. This is a very ambitious undertaking; the initiative is therefore separated into tasks with actionable objectives. Task 3 focuses on the optimization of in-situ observational networks. Its main objective is to develop and implement a procedure for enhancing and refining the observation system for identified essential carbon cycle variables (ECVs) that meets user-defined specifications at minimum total cost. This work focuses on the outline of the implementation plan, which includes a review of essential carbon cycle variables and observation technologies, mapping of ECV performance, and analysis of gaps and opportunities in order to design an improved observing system. As the subsequent step to landscape mapping of existing observational networks, a gap analysis of in-situ observations will begin in the terrestrial domain, to address missing coordination and large spatial gaps, and then extend to ocean and atmospheric observations.

  17. Hybrid Power Management (HPM) Program Resulted in Several New Applications

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2003-01-01

    Hybrid Power Management (HPM) is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The advanced power devices include ultracapacitors, fuel cells, and photovoltaics. HPM has extremely wide potential with applications from nanowatts to megawatts. Applications include power generation, transportation systems, biotechnology systems, and space power systems. HPM has the potential to significantly alleviate global energy concerns, improve the environment, and stimulate the economy.

  18. Illumination system development using design and analysis of computer experiments

    NASA Astrophysics Data System (ADS)

    Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter

    2015-09-01

    Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often converge to a local minimum influenced by the starting point, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function quantifies the homogeneity of the irradiance at the target for each configuration. The homogeneities obtained at the design points are used as input to a Gaussian process (GP), which provides a preliminary distribution for the expected merit space. Global optimization is then performed on the GP, which is more likely to yield optimal parameters. Next, the light-positioning case study is further investigated by varying the radius of the arc and by adding two sources symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE was six times faster than the standard simplex method at an equal uniformity of 97%. The results were successfully validated experimentally, with a relative error of 10%, using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources.

  19. Aeroelastic Optimization Study Based on X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley; Pak, Chan-Gi

    2014-01-01

    A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.

  20. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint-handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in overall computing time are achieved by using the algorithm in conjunction with neural networks.
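    The classic DE/rand/1/bin scheme can be sketched in a few lines. The multimodal Rastrigin-style objective below is a toy stand-in for the Navier-Stokes-based evaluation, and all parameter values are illustrative, not the paper's settings:

```python
import math
import random

def differential_evolution(f, bounds, np_=30, cr=0.9, fw=0.8, gens=300, seed=0):
    """Minimize f with the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Three distinct members other than i drive the mutation.
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)   # forces at least one mutated component
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < cr:
                    v = pop[a][j] + fw * (pop[b][j] - pop[c][j])  # mutate
                else:
                    v = pop[i][j]                                 # inherit
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            tv = f(trial)
            if tv <= fit[i]:             # greedy one-to-one selection
                pop[i], fit[i] = trial, tv
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# Multimodal toy objective (Rastrigin): many local optima, global minimum 0
# at the origin, standing in for the expensive flow-solver evaluation.
x, fx = differential_evolution(
    lambda p: sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in p),
    [(-5.12, 5.12)] * 2)
```

    The greedy selection step never discards an improvement, which is one reason DE tolerates discontinuities and multiple local optima better than gradient-based methods.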

  1. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  2. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  3. Heuristic approach to image registration

    NASA Astrophysics Data System (ADS)

    Gertner, Izidor; Maslov, Igor V.

    2000-08-01

    Image registration, i.e., the correct mapping of images obtained from different sensor readings onto a common reference frame, is a critical part of multi-sensor ATR/AOR systems based on readings from different types of sensors. In order to fuse two different sensor readings of the same object, the readings have to be put into a common coordinate system. This task can be formulated as an optimization problem in the space of all possible affine transformations of an image. In this paper, a combination of heuristic methods is explored to register grayscale images. A modification of the Genetic Algorithm is used as the first step in the global search for the optimal transformation. It covers the entire search space with (randomly or heuristically) scattered probe points and helps significantly reduce the search space to a subspace of the potentially most successful transformations. Due to its discrete character, however, the Genetic Algorithm in general cannot converge once it comes close to the optimum. Its termination point can be specified either as a predefined number of generations or as the achievement of an acceptable convergence level. To refine the search, potentially optimal subspaces are then searched using Tabu Search and Simulated Annealing, which are better suited to efficient local search.
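    The local-refinement stage can be illustrated with a minimal simulated-annealing sketch. The quadratic misregistration cost and the starting point (standing in for a coarse solution handed over by the genetic algorithm) are hypothetical:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, t_end=1e-3, iters=5000, seed=0):
    """Refine a candidate solution: accept uphill moves with probability
    exp(-delta / T) while the temperature T cools geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    for k in range(iters):
        t = t0 * (t_end / t0) ** (k / iters)   # geometric cooling schedule
        sigma = step * t / t0 + 0.01           # proposal size shrinks with T
        y = [v + rng.gauss(0.0, sigma) for v in x]
        fy = f(y)
        # Always accept improvements; accept worsening moves with
        # temperature-dependent probability to escape shallow local optima.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
    return best, fbest

# Hypothetical misregistration cost with its minimum at shift (2.0, -1.0);
# the starting point mimics a coarse transformation found by the GA stage.
best, err = simulated_annealing(
    lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2, [1.5, -0.4])
```

    Because the proposal size shrinks with temperature, the late iterations behave like the fine local search the abstract assigns to this stage, polishing the GA's coarse answer.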

  4. Protein Loop Structure Prediction Using Conformational Space Annealing.

    PubMed

    Heo, Seungryong; Lee, Juyong; Joo, Keehyoung; Shin, Hang-Cheol; Lee, Jooyoung

    2017-05-22

    We have developed a protein loop structure prediction method by combining a new energy function, which we call E_PLM (energy for protein loop modeling), with the conformational space annealing (CSA) global optimization algorithm. The energy function includes stereochemistry, dynamic fragment assembly, distance-scaled finite ideal gas reference (DFIRE), and generalized orientation- and distance-dependent terms. For the conformational search of loop structures, we used the CSA algorithm, which has been quite successful in dealing with various hard global optimization problems. We assessed the performance of E_PLM with two widely used loop-decoy sets, Jacobson and RAPPER, and compared the results against the DFIRE potential. The accuracy of model selection from a pool of loop decoys as well as de novo loop modeling starting from randomly generated structures was examined separately. For the selection of a nativelike structure from a decoy set, E_PLM was more accurate than DFIRE in the case of the Jacobson set and had similar accuracy in the case of the RAPPER set. In terms of sampling more nativelike loop structures, E_PLM outperformed E_DFIRE for both decoy sets. This new approach equipped with E_PLM and CSA can serve as the state-of-the-art de novo loop modeling method.

  5. The GGOS Global Space Geodesy Network and its Evolution

    NASA Astrophysics Data System (ADS)

    Pearlman, M. R.; Pavlis, E. C.; Ma, C.; Noll, C. E.; Neilan, R. E.; Stowers, D. A.; Wetzel, S.

    2013-12-01

    The improvements in the reference frame and other space geodesy data products spelled out in the GGOS 2020 plan will evolve over time as new space geodesy sites enhance the global distribution of the network and new technologies are implemented at the sites, enabling improved data processing and analysis. The goal of 30 globally distributed core sites with VLBI, SLR, GNSS, and DORIS (where available) will take time to materialize. Co-location sites with less than the full core complement will continue to play a very important role in filling out the network while it is evolving, and even after full implementation. GGOS, through its Call for Participation, bilateral and multilateral discussions, and work through the IAG Services, has been encouraging current groups to upgrade and new groups to join the activity. Simulations examine the projected accuracy and stability of the network that would exist in five and ten years' time, were the proposed expansion to fully materialize by then. Over the last year additional sites have joined the GGOS network, and ground techniques have continued to make progress on new technology systems. This talk will give an update on the current expansion of the global network and the network configuration that we forecast over the next 10 years.

  6. Closed Loop System Identification with Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.

    2004-01-01

    High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identification is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.

  7. Enhancing Polyhedral Relaxations for Global Optimization

    ERIC Educational Resources Information Center

    Bao, Xiaowei

    2009-01-01

    During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…

  8. An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids

    NASA Astrophysics Data System (ADS)

    Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun

    2017-07-01

    Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm is presented to generate the global POs around irregular-shaped, uniformly rotating asteroids. The algorithm proceeds in three steps: global search, local refinement, and model continuation. First, a mascon model with a low number of particles and an optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy, with strong global exploration and bypassing abilities, was adopted. This algorithm can be regarded as a search engine that finds multiple globally optimal regions in which potential POs are located. Second, a differential correction was applied to locally refine the global search solutions and generate accurate POs in the mascon model; an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around the elongated, shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. The proposed algorithm is generic and can be conveniently extended to explore periodic motions in other gravitational systems.

  9. Analysis and Optimization of Building Energy Consumption

    NASA Astrophysics Data System (ADS)

    Chuah, Jun Wei

    Energy is one of the most important resources required by modern human society. In 2010, energy expenditures represented 10% of global gross domestic product (GDP). By 2035, global energy consumption is expected to increase by more than 50% from current levels. The increased pace of global energy consumption leads to significant environmental and socioeconomic issues: (i) carbon emissions, from the burning of fossil fuels for energy, contribute to global warming, and (ii) increased energy expenditures lead to reduced standard of living. Efficient use of energy, through energy conservation measures, is an important step toward mitigating these effects. Residential and commercial buildings represent a prime target for energy conservation, comprising 21% of global energy consumption and 40% of the total energy consumption in the United States. This thesis describes techniques for the analysis and optimization of building energy consumption. The thesis focuses on building retrofits and building energy simulation as key areas in building energy optimization and analysis. The thesis first discusses and evaluates building-level renewable energy generation as a solution toward building energy optimization. The thesis next describes a novel heating system, called localized heating. Under localized heating, building occupants are heated individually by directed radiant heaters, resulting in a considerably reduced heated space and significant heating energy savings. To support localized heating, a minimally-intrusive indoor occupant positioning system is described. The thesis then discusses occupant-level sensing (OLS) as the next frontier in building energy optimization. OLS captures the exact environmental conditions faced by each building occupant, using sensors that are carried by all building occupants. The information provided by OLS enables fine-grained optimization for unprecedented levels of energy efficiency and occupant comfort. 
The thesis also describes a retrofit-oriented building energy simulator, ROBESim, that natively supports building retrofits. ROBESim extends existing building energy simulators by providing a platform for the analysis of novel retrofits, in addition to simulating existing retrofits. Using ROBESim, retrofits can be automatically applied to buildings, obviating the need for users to manually update building characteristics for comparisons between different building retrofits. ROBESim also includes several ease-of-use enhancements to support users of all experience levels.

  10. 78 FR 67132 - GPS Satellite Simulator Control Working Group Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-08

    ... DEPARTMENT OF DEFENSE Department of the Air Force GPS Satellite Simulator Control Working Group Meeting AGENCY: Space and Missile Systems Center, Global Positioning Systems (GPS) Directorate, Air Force... Control Working Group (SSCWG) meeting on 6 December 2013 from 0900-1300 PST at Los Angeles Air Force Base...

  11. 77 FR 70421 - GPS Satellite Simulator Control Working Group Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... DEPARTMENT OF DEFENSE Department of the Air Force GPS Satellite Simulator Control Working Group Meeting AGENCY: Space and Missile Systems Center, Global Positioning Systems (GPS) Directorate, Department... Control Working Group (SSCWG) meeting on 14 December 2012 from 0900-1600 PST at Los Angeles Air Force Base...

  12. 40 CFR 799.9370 - TSCA prenatal developmental toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rabbit. (iv) Number of animals. Each test and control group shall contain a sufficient number of animals... optimal for spacing the dose levels, and the addition of a fourth test group is often preferable to using... group or a vehicle-control group if a vehicle is used in administering the test substance. (B) The...

  13. 40 CFR 799.9370 - TSCA prenatal developmental toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rabbit. (iv) Number of animals. Each test and control group shall contain a sufficient number of animals... optimal for spacing the dose levels, and the addition of a fourth test group is often preferable to using... group or a vehicle-control group if a vehicle is used in administering the test substance. (B) The...

  14. 40 CFR 799.9370 - TSCA prenatal developmental toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... rabbit. (iv) Number of animals. Each test and control group shall contain a sufficient number of animals... optimal for spacing the dose levels, and the addition of a fourth test group is often preferable to using... group or a vehicle-control group if a vehicle is used in administering the test substance. (B) The...

  15. 40 CFR 799.9370 - TSCA prenatal developmental toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... rabbit. (iv) Number of animals. Each test and control group shall contain a sufficient number of animals... optimal for spacing the dose levels, and the addition of a fourth test group is often preferable to using... group or a vehicle-control group if a vehicle is used in administering the test substance. (B) The...

  16. 40 CFR 799.9370 - TSCA prenatal developmental toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... rabbit. (iv) Number of animals. Each test and control group shall contain a sufficient number of animals... optimal for spacing the dose levels, and the addition of a fourth test group is often preferable to using... group or a vehicle-control group if a vehicle is used in administering the test substance. (B) The...

  17. TH-A-9A-02: BEST IN PHYSICS (THERAPY) - 4D IMRT Planning Using Highly- Parallelizable Particle Swarm Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modiri, A; Gu, X; Sawant, A

    2014-06-15

    Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom rather than a constraint in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5cm tumor motion despite the use of abdominal compression. Ten corresponding IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique that is well-suited for such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding, internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose sparing achieved via PSO-4DIMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool for achieving globally optimal solutions to such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.

  18. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates.
The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
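
    The AGCT idea can be sketched as one GA generation with elitism plus a distance-based tabu test. Everything below (the parameter names, the max-norm tabu radius, the sphere objective) is an illustrative assumption, not the authors' exact AGCT mechanics:

```python
import random

def agct_generation(pop, fitness, tabu, tabu_radius=0.05, tabu_size=50,
                    mut_rate=0.2, elite=2):
    """One generation of a GA with elitism and a simple tabu feature that
    rejects children too close (max-norm) to recently visited solutions."""
    ranked = sorted(pop, key=fitness)
    nxt = [list(s) for s in ranked[:elite]]              # elitism
    while len(nxt) < len(pop):
        for _ in range(10):                              # bounded tabu retries
            a, b = random.sample(ranked[:len(ranked) // 2], 2)  # parents
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                    # one-point crossover
            child = [g + random.gauss(0, 0.1) if random.random() < mut_rate
                     else g for g in child]              # mutation
            hit = any(max(abs(g - t) for g, t in zip(child, seen)) < tabu_radius
                      for seen in tabu)
            if not hit:                                  # not tabu: accept
                break
        tabu.append(child)
        del tabu[:-tabu_size]                            # fixed-size tabu list
        nxt.append(child)
    return nxt

random.seed(1)
pop = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(20)]
sphere = lambda x: sum(g * g for g in x)
tabu = []
for _ in range(40):
    pop = agct_generation(pop, sphere, tabu)
best = min(sphere(x) for x in pop)
```

    The GC variant would additionally adjust `tabu_radius`, `tabu_size`, and the mutation settings at run-time based on convergence feedback, rather than holding them fixed as this sketch does.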

  19. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model had changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
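
    The Expected Improvement criterion that drives EGO has a closed form. For a surrogate prediction with Gaussian mean mu and standard deviation sigma at a candidate point, and current best observed value f_best, a direct implementation (written in the minimization convention; maximizing a worst-case-hot metric simply flips the sign):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI = (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu) / sigma,
    where Phi and phi are the standard normal CDF and PDF."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # normal CDF via erf
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (f_best - mu) * Phi + sigma * phi
</antml```

    The multi-start modification described above amounts to maximizing this EI surface from many starting points and keeping several distinct local maximizers as the next batch of runs, rather than only the single global EI maximizer.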

  20. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, e.g., aerodynamic coefficients, can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer improves significantly.

  1. A global optimization algorithm inspired in the behavior of selfish herds.

    PubMed

    Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián

    2017-10-01

    In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators by two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on their classification as either a prey or a predator, each individual is guided by a set of unique evolutionary operators inspired by such prey-predator relationships. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared to other well-known evolutionary optimization approaches such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), Crow Search Algorithm (CSA), Dragonfly Algorithm (DA), Moth-flame Optimization Algorithm (MOA) and Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions, commonly considered within the literature of evolutionary algorithms. The experimental results show the remarkable performance of our proposed approach against those of the other compared methods, and as such SHO is proven to be an excellent alternative to solve global optimization problems. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Absence of gender effect on children with attention-deficit/hyperactivity disorder as assessed by optimized voxel-based morphometry.

    PubMed

    Yang, Pinchen; Wang, Pei-Ning; Chuang, Kai-Hsiang; Jong, Yuh-Jyh; Chao, Tzu-Cheng; Wu, Ming-Ting

    2008-12-30

    Brain abnormalities, as determined by structural magnetic resonance imaging (MRI), have been reported in patients with attention-deficit hyperactivity disorder (ADHD); however, female subjects have been underrepresented in previous reports. In this study, we used optimized voxel-based morphometry to compare the total and regional gray matter volumes between groups of 7- to 17-year-old ADHD and healthy children (114 subjects in total). Fifty-seven children with ADHD (35 males and 22 females) and 57 healthy children received MRI scans. Segmented brain MRI images were normalized into standardized stereotactic space, modulated to allow volumetric analysis, smoothed and compared at the voxel level with statistical parametric mapping. Global volumetric comparisons between groups revealed that the total brain volumes of ADHD children were smaller than those of the control children. As for the regional brain analysis, the brain volumes of ADHD children were found to be bilaterally smaller in the following regions as compared with normal control values: the caudate nucleus and the cerebellum. There were two clusters of regional decrease in the female brain, left posterior cingulum and right precuneus, as compared with the male brain. Brain regions showing the interaction effect of diagnosis and gender were negligible. These results were consistent with the hypothesized dysfunctional systems in ADHD, and they also suggested that neuroanatomical abnormalities in ADHD were not influenced by gender.

  3. A generalized global alignment algorithm.

    PubMed

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
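
    The stated complexity bounds (time proportional to the product of the sequence lengths, space proportional to their sum) are those of the classic global-alignment recurrence evaluated two rows at a time. A minimal score-only version in Python makes the recurrence concrete; GAP3's generalized difference-block scoring is layered on top of this and is not reproduced here:

```python
def global_align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Standard global alignment score by dynamic programming, keeping only
    two rows of the DP table (linear space, quadratic time)."""
    prev = [j * gap for j in range(len(b) + 1)]   # row 0: align prefix of b to gaps
    for i, ca in enumerate(a, 1):
        curr = [i * gap]                          # column 0: align prefix of a to gaps
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)  # (mis)match
            curr.append(max(diag,
                            prev[j] + gap,        # gap in b
                            curr[j - 1] + gap))   # gap in a
        prev = curr
    return prev[-1]
```

    Recovering the alignment itself (not just its score) in linear space requires the usual divide-and-conquer refinement of this recurrence.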

  4. Alien Genetic Algorithm for Exploration of Search Space

    NASA Astrophysics Data System (ADS)

    Patel, Narendra; Padhiyar, Nitin

    2010-10-01

    Genetic Algorithm (GA) is a widely accepted population based stochastic optimization technique used for single and multi objective optimization problems. Various modifications of GA have been proposed in the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global minimum. These two goals conflict: emphasizing the convergence rate makes the GA tend to converge to a local optimum, while emphasizing the probability of finding the global minimum demands large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of GA that adds an alien member to the population at every generation. Adding an alien member to the current population at every generation increases the probability of obtaining the global minimum while maintaining a high convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparing it with the conventional GA.
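
    The alien-member idea is a one-line change to a standard GA loop: each new generation receives one freshly randomized individual alongside the elites and offspring. The sketch below is an illustrative reconstruction (operator choices, rates, and the sphere test function are assumptions, not the authors' exact settings):

```python
import random

def alien_ga(fitness, lo, hi, dim, pop_size=20, gens=60, seed=3):
    """GA with elitism, one-point crossover, and mutation that also injects
    one random 'alien' member per generation to keep exploring the space."""
    rng = random.Random(seed)
    new_ind = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    pop = [new_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = [list(s) for s in pop[:2]]   # elitism: keep two best
        nxt.append(new_ind())              # the alien member
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)   # select from top half
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]      # one-point crossover
            if rng.random() < 0.3:         # mutate one gene
                k = rng.randrange(dim)
                child[k] += rng.gauss(0, 0.02 * (hi - lo))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = alien_ga(lambda x: sum(g * g for g in x), -5.0, 5.0, dim=3)
```

    Because the alien rarely survives selection unless it lands in an unexplored basin, the injection costs one slot per generation while preserving the convergence behavior of the rest of the population.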

  5. Towards a Global Hub and a Network for Collaborative Advancing of Space Weather Predictive Capabilities.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M. M.; Heynderickz, D.; Grande, M.; Opgenoorth, H. J.

    2017-12-01

    The COSPAR/ILWS roadmap on space weather published in 2015 (Advances in Space Research, 2015: DOI: 10.1016/j.asr.2015.03.023) prioritizes steps to be taken to advance understanding of space environment phenomena and to improve space weather forecasting capabilities. General recommendations include development of a comprehensive space environment specification, assessment of the state of the field on a 5-yr basis, and standardization of meta-data and product metrics. To facilitate progress towards roadmap goals, there is a need for a global hub for collaborative space weather capabilities assessment and development that brings together research, engineering, operational, educational, and end-user communities. The COSPAR Panel on Space Weather is aiming to build upon past progress and to facilitate coordination of established and new international space weather research and development initiatives. Keys to success include creating a flexible, collaborative, and inclusive environment and engaging motivated groups and individuals committed to active participation in international multi-disciplinary teams focused on topics addressing emerging needs and challenges in the rapidly growing field of space weather. Near-term focus includes comprehensive assessment of the state of the field and establishing an internationally recognized process to quantify and track progress over time, development of a global network of distributed web-based resources and interconnected interactive services required for space weather research, analysis, forecasting and education.

  6. Aerodynamic Shape Optimization Using Hybridized Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2003-01-01

    An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application). The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
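
    A minimal DE/rand/1/bin loop shows the base algorithm being hybridized. The paper's neural-network surrogates, local search, and Navier-Stokes objective are not reproduced; the sphere function below is a placeholder objective:

```python
import random

def differential_evolution(f, lo, hi, dim, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=5):
    """Classic DE/rand/1/bin for minimization with bound clipping."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            others = [x for j, x in enumerate(pop) if j != i]
            a, b, c = rng.sample(others, 3)          # three distinct donors
            jrand = rng.randrange(dim)               # force >= 1 mutated gene
            trial = [min(max(a[k] + F * (b[k] - c[k]), lo), hi)
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]            # mutation + binomial crossover
            ft = f(trial)
            if ft <= fit[i]:                         # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best_i = min(range(pop_size), key=lambda i: fit[i])
    return pop[best_i], fit[best_i]

best_x, best_f = differential_evolution(lambda x: sum(g * g for g in x),
                                        -5.0, 5.0, dim=4)
```

    In the hybridized setting, each call to `f` would be a flow solve (or a cheap surrogate prediction of one), which is why the independent trial evaluations map naturally onto distributed parallel computers.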

  7. Optimal mode transformations for linear-optical cluster-state generation

    DOE PAGES

    Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...

    2015-06-15

    In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient and show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselectively entangling n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.

  8. Industry and Government Officials Meet for Space Weather Summit

    NASA Astrophysics Data System (ADS)

    Intriligator, Devrie S.

    2008-10-01

    Commercial airlines, electric power grids, cell phones, handheld Global Positioning Systems: Although the Sun is less active due to solar minimum, the number and types of situations and technologies that can benefit from up-to-date space weather information are growing. To address this, the second annual summit of the Commercial Space Weather Interest Group (CSWIG) and the National Oceanic and Atmospheric Administration's Space Weather Prediction Center (SWPC) was held on 1 May 2008 during Space Weather Workshop (SWW), in Boulder, Colo.

  9. Mutually unbiased bases and semi-definite programming

    NASA Astrophysics Data System (ADS)

    Brierley, Stephen; Weigert, Stefan

    2010-11-01

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Gröbner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.

  10. A lazy way to design infrared lens

    NASA Astrophysics Data System (ADS)

    Qiu, RongSheng; Wu, JianDong; Chen, LongJiang; Yu, Kun; Pang, HaoJun; Hu, BaiZhen

    2017-08-01

    We designed a compact middle-wave infrared (MWIR) lens with a large focal length ratio (about 1.5:1), used in the 3.7 to 4.8 μm range. The lens consists of a compact front group and a re-imaging group. Thanks to the compact front-group configuration, it is possible to install a filter wheel mechanism in such a tight space. The total track length of the lens is about 50 mm, which includes a 2 mm thick protective window and a 12 mm cold shield. The full field of view of the lens is about 3.6°, the F-number is less than 1.6, and the image circle is about 4.6 mm in diameter. The design performance of the lens reaches the diffraction limit and changes little over a temperature range of -40°C to +60°C. This paper proposes a stepwise design method for infrared optical systems guided by a qualitative approach. The method fully utilizes the software's powerful global optimization ability and, with a little effort spent writing code snippets in the optical design software, frees the optical engineer from tedious calculation of the original structure.

  11. 76 FR 35858 - Global Positioning System Directorate (GPSD); Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-20

    ...); Notice of Meeting ACTION: Notice of Meeting--Public Interface Control Working Group (ICWG) for Signals-in... published a meeting notice on the Public Interface Control Group (ICWG) on June 2, 2011 Vol. 76, No. 106... Interface Control Working Group (ICWG) meeting for the NAVSTAR GPS public signals in space (SiS) documents...

  12. Pricing of swing options: A Monte Carlo simulation approach

    NASA Astrophysics Data System (ADS)

    Leow, Kai-Siong

    We study the problem of pricing swing options, a class of multiple early exercise options that are traded in the energy market, particularly in the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded from settlement to expiration with the counterparty is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but would lead to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm which is based on piecewise linear concave approximation of value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We present a proof of almost sure convergence: the algorithm generates the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example for pricing a natural gas swing call option.

  13. The Global Initiative for Children's Surgery: Optimal Resources for Improving Care.

    PubMed

    Goodman, Laura F; St-Louis, Etienne; Yousef, Yasmine; Cheung, Maija; Ure, Benno; Ozgediz, Doruk; Ameh, Emmanuel Adoyi; Bickler, Stephen; Poenaru, Dan; Oldham, Keith; Farmer, Diana; Lakhoo, Kokila

    2018-02-01

    The Lancet Commission on Global Surgery reported that 5 billion people lack access to safe, affordable surgical care. The majority of these people live in low-resource settings, where up to 50% of the population is children. The Disease Control Priorities (Debas HTP, Donkor A, Gawande DT, Jamison ME, Kruk, and Mock CN, editors. Essential Surgery. Disease Control Priorities. Third Edition, vol 1. Essential Surgery. Washington, DC: World Bank; 2015) on surgery included guidelines for the improvement of access to surgical care; however, these lack detail for children's surgery. The aim was to produce guidance for low- and middle-income countries (LMICs) on the resources required for children's surgery at each level of hospital care. The Global Initiative for Children's Surgery (GICS) held an inaugural meeting at the Royal College of Surgeons in London in May 2016, with 52 surgical providers from 21 countries, including 27 providers from 18 LMICs. Delegates engaged in working groups over 2 days to prioritize needs and solutions for optimizing children's surgical care; these were categorized into infrastructure, service delivery, training, and research. At a second GICS meeting in Washington in October 2016, 94 surgical care providers, half from LMICs, defined the optimal resources required at primary, secondary, tertiary, and national referral level through a series of working group engagements. Consensus solutions for optimizing children's surgical care included: establishing standards and integrating them into national surgical plans; ensuring each country has at least one children's hospital; designating, facilitating, and supporting regional training hubs covering all children's surgical specialties; and establishing regional research support centers. An "Optimal Resources" document was produced detailing the facilities and resources required at each level of care.
The Optimal Resources document has been produced by surgical providers from LMICs who have the greatest insight into the needs and priorities in their population. The document will be refined further through online GICS Working Groups and the World Health Organization for broad application to ensure all children have timely access to safe surgical care. Georg Thieme Verlag KG Stuttgart · New York.

  14. A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2008-01-01

    An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
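
    The annealing component can be sketched generically. This toy loop shows how occasional uphill moves, accepted with probability exp(-ΔE/T), let the search escape the high-energy intermediate states mentioned above; the actual track-assignment objective with smoothness and utilization terms is not reproduced, and the 1-D multimodal cost below is purely illustrative:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=2000, seed=7):
    """Generic simulated annealing: always accept downhill moves, accept
    uphill moves with probability exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:                 # track the best state ever visited
                best, fbest = x, fx
        t *= cooling                       # geometric cooling schedule
    return best, fbest

# Toy multimodal cost: quadratic envelope with sinusoidal local minima.
cost = lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x)
best, fbest = anneal(cost, lambda x, r: x + r.gauss(0, 0.5), x0=-3.0)
```

    Early in the schedule T is large and most moves are accepted, so the search wanders across basins; as T decays it behaves like greedy descent inside the best basin found.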

  15. Discrete Optimization in Chemical Space Reference Manual

    DTIC Science & Technology

    2012-10-01


  16. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
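
    The two-step structure can be sketched in a few lines: a coarse grid traverse picks the most promising region, then a stochastic local search refines within it. Everything here is an illustrative assumption (a simple hill-climb stands in for the PSO stage, and the quadratic objective stands in for SVR cross-validation error over parameters such as C and gamma):

```python
import itertools
import random

def grid_then_local(objective, lo, hi, coarse=5, refine_iters=300, seed=0):
    """Two-step hybrid search over a 2-D parameter space."""
    # Step 1: grid traverse narrows the search from global to local.
    step = (hi - lo) / (coarse - 1)
    grid = [(lo + i * step, lo + j * step)
            for i, j in itertools.product(range(coarse), repeat=2)]
    bx, by = min(grid, key=lambda p: objective(*p))
    bf = objective(bx, by)
    # Step 2: stochastic refinement around the winning grid point
    # (a simple hill-climb standing in for the PSO stage).
    rng = random.Random(seed)
    for _ in range(refine_iters):
        cx = min(max(bx + rng.gauss(0, step / 2), lo), hi)
        cy = min(max(by + rng.gauss(0, step / 2), lo), hi)
        cf = objective(cx, cy)
        if cf < bf:
            bx, by, bf = cx, cy, cf
    return (bx, by), bf

best_p, best_f = grid_then_local(lambda x, y: (x - 1.3) ** 2 + (y + 0.7) ** 2,
                                 -5.0, 5.0)
```

    The appeal of the split is that the expensive global stage needs only coarse resolution, while the local stage inherits a small, well-scaled search region.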

  17. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.

  18. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  19. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  20. Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point.

    PubMed

    Sánchez, Joan-Pau; McInnes, Colin R

    2015-01-01

    Within the context of anthropogenic climate change, but also considering the Earth's natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth's radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth's climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes.

  1. Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point

    PubMed Central

    Sánchez, Joan-Pau; McInnes, Colin R.

    2015-01-01

    Within the context of anthropogenic climate change, but also considering the Earth’s natural climate variability, this paper explores the speculative possibility of large-scale active control of the Earth’s radiative forcing. In particular, the paper revisits the concept of deploying a large sunshade or occulting disk at a static position near the Sun-Earth L1 Lagrange equilibrium point. Among the solar radiation management methods that have been proposed thus far, space-based concepts are generally seen as the least timely, albeit also as one of the most efficient. Large occulting structures could potentially offset all of the global mean temperature increase due to greenhouse gas emissions. This paper investigates optimal configurations of orbiting occulting disks that not only offset a global temperature increase, but also mitigate regional differences such as latitudinal and seasonal difference of monthly mean temperature. A globally resolved energy balance model is used to provide insights into the coupling between the motion of the occulting disks and the Earth’s climate. This allows us to revise previous studies, but also, for the first time, to search for families of orbits that improve the efficiency of occulting disks at offsetting climate change on both global and regional scales. Although natural orbits exist near the L1 equilibrium point, their period does not match that required for geoengineering purposes, thus forced orbits were designed that require small changes to the disk attitude in order to control its motion. Finally, configurations of two occulting disks are presented which provide the same shading area as previously published studies, but achieve reductions of residual latitudinal and seasonal temperature changes. PMID:26309047

  2. Signal Strength-Based Global Navigation Satellite System Performance Assessment in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user in the Space Service Volume (SSV) when a multi-GNSS solution space approach is utilized. The ICG Working Group: Enhancement of GNSS Performance, New Services and Capabilities has started a three-phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The second phase of this initiative, which increases in complexity and fidelity with each phase, augments the Phase 1 purely geometrical approach with signal strength-based limitations to determine whether access is valid. The second phase of analysis has been completed, and the results are documented in this paper.

  3. [Visual and motor functions in schizophrenic patients].

    PubMed

    Del Vecchio, S; Gargiulo, P A

    1992-12-01

    In the present work, visual and motor functions were explored in 26 chronic schizophrenic patients and 7 acute schizophrenic patients, compared with 26 normal controls, by means of the Bender-Gestalt Test. The parameters considered were: form distortion, rotation, integration, perseveration, use of space, fine motricity, score (a global parameter), and time employed. Regarding distortion and rotation, there were highly significant differences between chronic patients and the control group. Among acute patients, perseveration was also highly significant. Conversely, integration and use of space did not differ significantly among the three groups. The global score, derived from all the above parameters, showed important differences between both patient groups on the one hand and the control group on the other. Although the patients were receiving neuroleptic drugs, it can nevertheless be said that the Bender-Gestalt Test allows recognition of alterations in perceptual closure consistent with a loss of the objective structure of perceived phenomena, in both chronic and acute patients.

  4. Three-level global resource allocation model for hiv control: A hierarchical decision system approach.

    PubMed

    Kassa, Semu Mitiku

    2018-02-01

    Funds from various global organizations, such as The Global Fund and The World Bank, are not directly distributed to the targeted risk groups. Especially in the so-called third-world countries, the major part of the funding for HIV prevention programs comes from these global funding organizations. The allocation of these funds usually passes through several levels of decision-making bodies, each with its own specific parameters to control and specific objectives to achieve. However, these decisions are made mostly in a heuristic manner, and this may lead to a non-optimal allocation of the scarce resources. In this paper, a hierarchical mathematical optimization model is proposed to solve such a problem. Combining existing epidemiological models with the kinds of interventions in practice, a 3-level hierarchical decision-making model for optimally allocating such resources has been developed and analyzed. When the impact of antiretroviral therapy (ART) is included in the model, it has been shown that the objective function of the lower-level decision-making structure is a non-convex minimization problem in the allocation variables, even if all the production functions for the intervention programs are assumed to be linear.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas

    This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.
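    As a hedged illustration of the search strategy described above, the sketch below applies a basic real-coded genetic algorithm to recover the frequency and amplitude of a single noiseless sinusoid, a toy stand-in for one LISA source; the operator choices (elitism, blend crossover, Gaussian mutation) and all constants are illustrative assumptions, not the authors' pipeline.

```python
import math
import random

def make_signal(freq, amp, n=200, dt=0.05):
    # synthetic "detector" data: a single sinusoid, no noise
    return [amp * math.sin(2 * math.pi * freq * i * dt) for i in range(n)]

def fitness(params, data, dt=0.05):
    # higher is better: negative sum-of-squares residual against the model
    freq, amp = params
    return -sum((d - amp * math.sin(2 * math.pi * freq * i * dt)) ** 2
                for i, d in enumerate(data))

def ga_search(data, pop_size=80, generations=120, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 2.0), rng.uniform(0.0, 2.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: fitness(p, data), reverse=True)
        elite = ranked[: pop_size // 4]          # elitism: keep the top quarter
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # parents drawn from the elite
            w = rng.random()                     # blend (arithmetic) crossover
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)]
            if rng.random() < 0.3:               # Gaussian mutation for diversity
                child[rng.randrange(2)] += rng.gauss(0.0, 0.05)
            children.append(tuple(child))
        pop = children
    return max(pop, key=lambda p: fitness(p, data))

data = make_signal(freq=0.37, amp=1.3)
best = ga_search(data)
```

With overlapping sources the fitness landscape gains many comparable peaks, which is exactly where a population-based global search of this kind is expected to help.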

  6. Adaptive feature selection using v-shaped binary particle swarm optimization.

    PubMed

    Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
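    The V-shaped transfer rule at the heart of the method can be sketched as follows, using a toy fitness (reward relevant features, penalize subset size) in place of the paper's correlation-information-entropy objective; the constants and the relevant-feature set here are illustrative assumptions.

```python
import math
import random

def v_transfer(v):
    # V-shaped transfer function: probability of *flipping* the current bit
    return abs(math.tanh(v))

def fitness(bits, relevant):
    # toy objective: reward selected relevant features, penalise subset size
    return sum(b for i, b in enumerate(bits) if i in relevant) - 0.2 * sum(bits)

def v_bpso(n_feats=12, relevant=frozenset({1, 4, 7}),
           n_particles=20, iters=60, seed=3):
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(n_particles)]
    V = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [row[:] for row in X]
    gbest = max(X, key=lambda b: fitness(b, relevant))[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n_feats):
                r1, r2 = rng.random(), rng.random()
                V[p][d] = (0.7 * V[p][d]
                           + 1.5 * r1 * (pbest[p][d] - X[p][d])
                           + 1.5 * r2 * (gbest[d] - X[p][d]))
                if rng.random() < v_transfer(V[p][d]):
                    X[p][d] = 1 - X[p][d]        # V-shaped rule flips the bit
            if fitness(X[p], relevant) > fitness(pbest[p], relevant):
                pbest[p] = X[p][:]
        gbest = max(pbest, key=lambda b: fitness(b, relevant))[:]
    return gbest

best_subset = v_bpso()
```

Unlike the S-shaped variant, which maps velocity to the probability that a bit is 1, the V-shaped rule flips the current bit with probability |tanh(v)|, which is what lifts the hard constraint on subset size mentioned in the abstract.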

  7. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850

  8. Autonomous interplanetary constellation design

    NASA Astrophysics Data System (ADS)

    Chow, Cornelius Channing, II

    According to NASA's integrated space technology roadmaps, space-based infrastructures are envisioned as necessary ingredients to a sustained effort in continuing space exploration. Whether it be for extra-terrestrial habitats, roving/cargo vehicles, or space tourism, autonomous space networks will provide a vital communications lifeline for both future robotic and human missions alike. Projecting that the Moon will be a bustling hub of activity within a few decades, a near-term opportunity for in-situ infrastructure development is within reach. This dissertation addresses the anticipated need for in-space infrastructure by investigating a general design methodology for autonomous interplanetary constellations; to illustrate the theory, this manuscript presents results from an application to the Earth-Moon neighborhood. The constellation design methodology is formulated as an optimization problem, involving a trajectory design step followed by a spacecraft placement sequence. Modeling the dynamics as a restricted 3-body problem, the investigated design space consists of families of periodic orbits which play host to the constellations, punctuated by arrangements of spacecraft autonomously guided by a navigation strategy called LiAISON (Linked Autonomous Interplanetary Satellite Orbit Navigation). Instead of more traditional exhaustive search methods, a numerical continuation approach is implemented to map the admissible configuration space. In particular, Keller's pseudo-arclength technique is used to follow folding/bifurcating solution manifolds, which are otherwise inaccessible with other parameter continuation schemes. A succinct characterization of the underlying structure of the local, as well as global, extrema is thus achievable with little a priori intuition of the solution space. Furthermore, the proposed design methodology offers benefits in computation speed plus the ability to handle mildly stochastic systems. 
An application of the constellation design methodology to the restricted Earth-Moon system reveals optimal pairwise configurations for various L1, L2, and L5 (halo, axial, and vertical) periodic orbit families. Navigation accuracies on the order of 10^(±1) meters in position space are obtained for the optimal Earth-Moon constellations, given measurement noise on the order of 1 meter.

  9. Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour

    PubMed Central

    Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad

    2013-01-01

    A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. As compared to conventional hyper beamforming of linear antenna arrays, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and the same or less first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. CAB finds a near global optimal solution, unlike RGA, PSO, and DE in the present problem. The above comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843

  10. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    PubMed

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) by providing more uniform coverage of a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thereby to improve them.
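    The stratified sampling step can be sketched as a standard Latin-hypercube routine: each dimension is split into equal-probability bins and every bin is hit exactly once. This is plain LHS under generic bounds, not the authors' modified variant tailored to the FBA-optimal space.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw n_samples points; in each dimension, every stratum is used once."""
    rng = random.Random(seed)
    # per-dimension random assignment of strata (bins) to samples
    strata = [list(range(n_samples)) for _ in bounds]
    for s in strata:
        rng.shuffle(s)
    samples = []
    for i in range(n_samples):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            # jitter inside the assigned bin, then scale to [lo, hi]
            u = (strata[d][i] + rng.random()) / n_samples
            point.append(lo + u * (hi - lo))
        samples.append(point)
    return samples

pts = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
```

The one-sample-per-stratum guarantee is what gives LHS its more uniform coverage than plain Monte Carlo for the same sample budget.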

  11. Improving spacecraft design using a multidisciplinary design optimization methodology

    NASA Astrophysics Data System (ADS)

    Mosher, Todd Jon

    2000-10-01

    Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms are used and find solutions by reviewing fewer possible solutions with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS without aerobraking mission, a counterintuitive result is discovered. 
Several design options that don't use aerobraking are feasible and cost effective. The third example is an original commercial lunar mission entitled Eagle-eye. This example shows how an MDO approach is applied to an original mission with a larger feasible design space. It also incorporates a simplified business case analysis.

  12. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: in particular, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
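    The flavour of such an algorithm can be sketched on a small smooth problem with a deliberately inexact gradient; the Cauchy-point model, the ratio thresholds, and the bounded-relative-error noise model below are textbook-style assumptions, not the paper's Hilbert-space formulation.

```python
import math
import random

def f(x):
    # simple smooth test objective with minimum at (3, -1)
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)]

def noisy_grad(x, rng, rel_err=0.1):
    # inexact gradient: each component carries bounded relative error
    return [g * (1.0 + rel_err * (2.0 * rng.random() - 1.0)) for g in grad(x)]

def trust_region(x0, radius=1.0, iters=150, seed=7):
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        g = noisy_grad(x, rng)
        gnorm = math.sqrt(sum(c * c for c in g))
        if gnorm < 1e-10:
            break
        # Cauchy-like step: steepest descent clipped to the trust radius,
        # under the local model m(p) = f(x) + g.p + 0.5*|p|^2
        step = min(radius, gnorm)
        p = [-c / gnorm * step for c in g]
        predicted = gnorm * step - 0.5 * step * step
        actual = f(x) - f([x[0] + p[0], x[1] + p[1]])
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.75:
            radius = min(2.0 * radius, 10.0)     # model trusted: expand region
        elif rho < 0.25:
            radius *= 0.25                       # poor agreement: shrink region
        if rho > 0.1:                            # accept the step only if it helps
            x = [x[0] + p[0], x[1] + p[1]]
    return x

x_opt = trust_region([0.0, 0.0])
```

The ratio test between actual and predicted decrease is what makes the method tolerant of gradient error: a bad (noisy) step simply fails the test and shrinks the region rather than derailing the iteration.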

  13. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1973-01-01

    A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel optimal two-impulse trajectories which originate at the earth or a point in space, and fuel optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.

  14. An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach

    NASA Astrophysics Data System (ADS)

    Chu, A.; Ogata, Y.; Katsura, K.

    2010-12-01

    For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), showing substantial improvement in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the complete region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a newer approach that allows the parameters to vary regionally for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the maximum likelihood results obtained with zone divisions to those obtained with an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.

  15. NASA's Contribution to Global Space Geodesy Networks

    NASA Technical Reports Server (NTRS)

    Bosworth, John M.

    1999-01-01

    The NASA Space Geodesy program continues to be a major provider of space geodetic data for the international earth science community. NASA operates high performance Satellite Laser Ranging (SLR), Very Long Baseline Interferometry (VLBI) and Global Positioning System (GPS) ground receivers at well over 30 locations around the world and works in close cooperation with space geodetic observatories around the world. NASA has also always been at the forefront in the quest for technical improvement and innovation in the space geodesy technologies to make them even more productive, accurate and economical. This presentation will highlight the current status of NASA's networks; the plans for partnerships with international groups in the southern hemisphere to improve the geographic distribution of space geodesy sites and the status of the technological improvements in SLR and VLBI that will support the new scientific thrusts proposed by interdisciplinary earth scientists. In addition, the expanding role of the NASA Space geodesy data archive, the CDDIS will be described.

  16. Space Station Habitability Research

    NASA Technical Reports Server (NTRS)

    Clearwater, Yvonne A.

    1988-01-01

    The purpose and scope of the Habitability Research Group within the Space Human Factors Office at the NASA/Ames Research Center is described. Both near-term and long-term research objectives in the space human factors program pertaining to the U.S. manned Space Station are introduced. The concept of habitability and its relevancy to the U.S. space program is defined within a historical context. The relationship of habitability research to the optimization of environmental and operational determinants of productivity is discussed. Ongoing habitability research efforts pertaining to living and working on the Space Station are described.

  17. Space Station habitability research

    NASA Technical Reports Server (NTRS)

    Clearwater, Y. A.

    1986-01-01

    The purpose and scope of the Habitability Research Group within the Space Human Factors Office at the NASA/Ames Research Center is described. Both near-term and long-term research objectives in the space human factors program pertaining to the U.S. manned Space Station are introduced. The concept of habitability and its relevancy to the U.S. space program is defined within a historical context. The relationship of habitability research to the optimization of environmental and operational determinants of productivity is discussed. Ongoing habitability research efforts pertaining to living and working on the Space Station are described.

  18. Space Station habitability research.

    PubMed

    Clearwater, Y A

    1988-02-01

    The purpose and scope of the Habitability Research Group within the Space Human Factors Office at the NASA/Ames Research Center is described. Both near-term and long-term research objectives in the space human factors program pertaining to the U.S. manned Space Station are introduced. The concept of habitability and its relevancy to the U.S. space program is defined within a historical context. The relationship of habitability research to the optimization of environmental and operational determinants of productivity is discussed. Ongoing habitability research efforts pertaining to living and working on the Space Station are described.

  19. Prediction of molecular crystal structures by a crystallographic QM/MM model with full space-group symmetry.

    PubMed

    Mörschel, Philipp; Schmidt, Martin U

    2015-01-01

    A crystallographic quantum-mechanical/molecular-mechanical model (c-QM/MM model) with full space-group symmetry has been developed for molecular crystals. The lattice energy was calculated by quantum-mechanical methods for short-range interactions and force-field methods for long-range interactions. The quantum-mechanical calculations covered the interactions within the molecule and the interactions of a reference molecule with each of the surrounding 12-15 molecules. The interactions with all other molecules were treated by force-field methods. In each optimization step the energies in the QM and MM shells were calculated separately as single-point energies; after adding both energy contributions, the crystal structure (including the lattice parameters) was optimized accordingly. The space-group symmetry was maintained throughout. Crystal structures with more than one molecule per asymmetric unit, e.g. structures with Z' = 2, hydrates and solvates, have been optimized as well. Test calculations with different quantum-mechanical methods on nine small organic molecules revealed that the density functional theory methods with dispersion correction using the B97-D functional with 6-31G* basis set in combination with the DREIDING force field reproduced the experimental crystal structures with good accuracy. Subsequently the c-QM/MM method was applied to nine compounds from the CCDC blind tests resulting in good energy rankings and excellent geometric accuracies.

  20. Birth Spacing of Pregnant Women in Nepal: A Community-Based Study.

    PubMed

    Karkee, Rajendra; Lee, Andy H

    2016-01-01

    Optimal birth spacing has health advantages for both mother and child. In developing countries, shorter birth intervals are common and associated with social, cultural, and economic factors, as well as a lack of family planning. This study investigated the first birth interval after marriage and the preceding interbirth interval in Nepal. A community-based prospective cohort study was conducted in the Kaski district of Nepal. Information on birth spacing, demographic, and obstetric characteristics was obtained from 701 pregnant women using a structured questionnaire. Logistic regression analyses were performed to ascertain factors associated with short birth spacing. About 39% of primiparous women gave birth to their first child within 1 year of marriage, and 23% of multiparous women had short preceding interbirth intervals (<24 months). The average birth spacing among the multiparous group was 44.9 (SD 21.8) months. Overall, short birth spacing appeared to be inversely associated with advancing maternal age. For the multiparous group, Janajati and lower caste women, and those whose newborn was female, were more likely to have short birth spacing. The preceding interbirth interval was relatively long in the Kaski district of Nepal and tended to be associated with maternal age, caste, and sex of the newborn infant. Optimal birth spacing programs should target Janajati and lower caste women, along with promotion of gender equality in society.

  1. Efficient droplet router for digital microfluidic biochip using particle swarm optimizer

    NASA Astrophysics Data System (ADS)

    Pan, Indrajit; Samanta, Tuhina

    2013-01-01

    The digital microfluidic biochip has emerged as a revolutionary development in micro-electromechanical research. Complex bioassays and pathological analyses are efficiently performed on this miniaturized chip with negligible amounts of sample specimens. Biochips were initially based on a continuous-fluid-flow mechanism but later evolved to the more efficient digital-fluid-flow concept. These second-generation biochips are capable of serving more complex bioassays. This operational change created a need for high-end computer-aided design tools for physical design automation and opened new avenues of research to support it. Droplet routing is one such major aspect, requiring minimization of both routing completion time and total electrode usage, a task that involves optimization of multiple associated parameters. In this paper we propose a particle swarm optimization based approach for droplet routing. The process operates in two phases: initially we perform clustering of the state space and classification of nets into designated clusters. This reduces the solution space by defining local suboptimal targets in the interleaved space between the source and global target of a net. In the next phase we resolve the concurrent routing issues of every suboptimal situation to generate the final routing schedule. The method was applied to some standard test benches and hard test sets. Comparative analysis of experimental results shows good improvement in unit cell usage, routing completion time, and execution time over several existing methods.

  2. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous in numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust, and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
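    The mechanics of SA can be sketched on a one-dimensional multimodal function; the Gaussian neighbour distribution, geometric cooling rate, and Rastrigin-style test function below are generic illustrative choices, not the modifications made in this work.

```python
import math
import random

def objective(x):
    # Rastrigin-style 1-D test function: many local minima, global minimum at 0
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def simulated_annealing(x0=4.0, t0=50.0, cooling=0.995, iters=3000, seed=11):
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.gauss(0.0, 1.0)           # random neighbour proposal
        fc = objective(cand)
        # Metropolis rule: always accept improvements; occasionally accept
        # uphill moves so the search can escape local minima
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        t *= cooling                             # geometric cooling schedule
    return best, fbest

best_x, best_f = simulated_annealing()
```

At high temperature nearly every move is accepted (global exploration); as the temperature decays, acceptance of uphill moves becomes rare and the search settles into the deepest basin visited.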

  3. Optimization of High-Dimensional Functions through Hypercube Evaluation

    PubMed Central

    Abiyev, Rahib H.; Tunay, Mustafa

    2015-01-01

    A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes the starting solution and evaluates solutions in the given hypercube. The displacement-shrink process determines displacement and evaluates objective functions using new points, and the searching space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on standard benchmark functions. Simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, and even 10,000 dimensions. Comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
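    The full HO algorithm is only named in this abstract, but its central displacement-shrink idea (recenter the hypercube on the incumbent best, then contract it) can be sketched. Everything below is an illustrative simplification under that reading, not the authors' algorithm:

```python
import random

def hypercube_opt(f, dim, lo=-5.0, hi=5.0, n_samples=64,
                  shrink=0.9, iters=100, seed=0):
    """Sample a hypercube, recenter it on the best point found, shrink, repeat."""
    rng = random.Random(seed)
    center = [0.5 * (lo + hi)] * dim
    half = 0.5 * (hi - lo)
    best, fbest = center[:], f(center)
    for _ in range(iters):
        for _ in range(n_samples):
            x = [c + rng.uniform(-half, half) for c in center]
            fx = f(x)
            if fx < fbest:
                best, fbest = x, fx
        center = best[:]          # displacement: recenter on the incumbent
        half *= shrink            # shrink the search hypercube
    return best, fbest

# Example: quadratic bowl with its minimum at (1, ..., 1) in 5-D.
best, fbest = hypercube_opt(lambda x: sum((t - 1.0) ** 2 for t in x), dim=5)
```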

  4. A Real-Time Greedy-Index Dispatching Policy for using PEVs to Provide Frequency Regulation Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Wu, Di; Lu, Ning

    This article presents a real-time greedy-index dispatching policy (GIDP) for using plug-in electric vehicles (PEVs) to provide frequency regulation services. A new service cost allocation mechanism is proposed to reward PEVs based on the amount of service they provide, while considering compensation for delayed charging and the reduction of battery lifetime due to participation in the service. The GIDP transforms the optimal dispatch problem from a high-dimensional space into a one-dimensional space while preserving solution optimality. When the transformed problem is solved in real time, the global optimality of the GIDP solution is guaranteed by a mathematically proven “indexability” property. Because the GIDP index can be calculated upon a PEV’s arrival and used for the entire decision-making process until its departure, the computational burden is minimized and the complexity of the aggregator dispatch process is significantly reduced. Finally, simulation results are used to evaluate the proposed GIDP and to demonstrate the potential profitability of providing frequency regulation service with PEVs.

  5. A Real-Time Greedy-Index Dispatching Policy for using PEVs to Provide Frequency Regulation Service

    DOE PAGES

    Ke, Xinda; Wu, Di; Lu, Ning

    2017-09-18

    This article presents a real-time greedy-index dispatching policy (GIDP) for using plug-in electric vehicles (PEVs) to provide frequency regulation services. A new service cost allocation mechanism is proposed to reward PEVs based on the amount of service they provide, while considering compensation for delayed charging and the reduction of battery lifetime due to participation in the service. The GIDP transforms the optimal dispatch problem from a high-dimensional space into a one-dimensional space while preserving solution optimality. When the transformed problem is solved in real time, the global optimality of the GIDP solution is guaranteed by a mathematically proven “indexability” property. Because the GIDP index can be calculated upon a PEV’s arrival and used for the entire decision-making process until its departure, the computational burden is minimized and the complexity of the aggregator dispatch process is significantly reduced. Finally, simulation results are used to evaluate the proposed GIDP and to demonstrate the potential profitability of providing frequency regulation service with PEVs.
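    The paper's index and its indexability proof are not given in the abstract, so the sketch below only illustrates the general shape of a greedy-index dispatch: score each vehicle with a scalar index and serve the regulation signal in index order. The urgency formula, the PEV fields, and the regulation-down framing are all hypothetical assumptions:

```python
from dataclasses import dataclass

@dataclass
class PEV:
    name: str
    energy_needed: float   # kWh still required by departure
    hours_left: float      # time until scheduled departure
    max_power: float       # charger limit, kW

def urgency_index(v):
    """Hypothetical scalar index: higher means less slack to defer charging."""
    return v.energy_needed / (v.hours_left * v.max_power)

def dispatch(fleet, regulation_kw):
    """Greedily assign a regulation-down (extra-charging) request by index.

    Vehicles with the most slack (lowest index) absorb the signal first,
    so urgent vehicles keep charging on their baseline schedule.
    """
    assignment = {}
    remaining = regulation_kw
    for v in sorted(fleet, key=urgency_index):
        if remaining <= 0:
            break
        p = min(v.max_power, remaining)
        assignment[v.name] = p
        remaining -= p
    return assignment

fleet = [PEV("a", 10, 2, 7), PEV("b", 2, 8, 7), PEV("c", 20, 3, 7)]
plan = dispatch(fleet, regulation_kw=10)
```

    Here "b" has the most slack, so it takes the full 7 kW it can absorb, "a" takes the remaining 3 kW, and the urgent "c" is untouched.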

  6. Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon

    Regression has successfully been incorporated into memetic algorithms (MA) to build surrogate models of the objective or constraint landscape of optimization problems. This helps alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist the MA in choosing the individuals that will undergo refinement. A support-vector-assisted MA was recently proposed to alleviate the need for function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from infeasible ones based on past solutions, so that search effort can be focused on a few potential regions only. For problems with equality constraints, however, the feasible space is obviously extremely small, and it is thus extremely difficult for the global search component of the MA to produce feasible solutions; the classification of feasible and infeasible space becomes ineffective. In this paper, a novel strategy to overcome this limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.

  7. Modeling and Optimization of Renewable and Hybrid Fuel Cell Systems for Space Power and Propulsion

    DTIC Science & Technology

    2010-11-14

    For that the project achieved: the optimization of SOFC and PEMFC internal structure and external shape under a volume constraint; an initial set of...subcomponent models for regenerative, renewable fuel cell system (RFC); the integration of PEMFC into RFC systems were developed; power electronic...with the same objectives and goals but using a PEMFC regenerative system instead. This research group studied and published on the optimization and

  8. Optimal message log reclamation for independent checkpointing

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Fuchs, W. Kent

    1993-01-01

    Independent (uncoordinated) checkpointing for parallel and distributed systems allows maximum process autonomy but suffers from possible domino effects and the associated storage space overhead for maintaining multiple checkpoints and message logs. Most research on checkpointing and recovery has assumed that only the checkpoints and message logs older than the global recovery line can be discarded. It is shown how recovery line transformation and decomposition can be applied to the problem of efficiently identifying all discardable message logs, thereby achieving optimal garbage collection. Communication trace-driven simulation for several parallel programs is used to show the benefits of the proposed algorithm for message log reclamation.

  9. An Investigation of Generalized Differential Evolution Metaheuristic for Multiobjective Optimal Crop-Mix Planning Decision

    PubMed Central

    Olugbara, Oludayo

    2014-01-01

    This paper presents annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. Optimal crop production in a particular planting season is a crucial decision-making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with those obtained using the epsilon-constrained and nondominated sorting genetic algorithms, two representatives of the state of the art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369

  10. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. Optimal crop production in a particular planting season is a crucial decision-making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with those obtained using the epsilon-constrained and nondominated sorting genetic algorithms, two representatives of the state of the art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.
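    GDE3 itself is a multiobjective extension and is not reproduced here; the sketch below shows only the classic DE/rand/1/bin variation-and-selection loop that GDE3 generalizes, applied to an illustrative single-objective sphere function with assumed parameter values:

```python
import random

def de_rand_1_bin(f, dim, np_=30, F=0.6, CR=0.9, gens=150,
                  lo=-5.0, hi=5.0, seed=0):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two random
    members added to a third, crossover binomially, keep the better of
    trial and target."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantees at least one mutated gene
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(hi, max(lo, trial[j]))
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    i = min(range(np_), key=lambda k: fit[k])
    return pop[i], fit[i]

best, val = de_rand_1_bin(lambda x: sum(t * t for t in x), dim=5)
```

    GDE3 replaces the scalar `ft <= fit[i]` comparison with Pareto dominance plus nondominated sorting, which is how it handles the two crop-mix objectives simultaneously.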

  11. Optimizing the Attitude Control of Small Satellite Constellations for Rapid Response Imaging

    NASA Astrophysics Data System (ADS)

    Nag, S.; Li, A.

    2016-12-01

    Distributed Space Missions (DSMs), such as formation flights and constellations, are being recognized as important solutions for increasing measurement samples over space and time. Given the increasingly accurate attitude control systems emerging in the commercial market, small spacecraft now have the ability to slew and point within a few minutes of notice. In spite of hardware development in CubeSats at the payload (e.g. NASA InVEST) and subsystem (e.g. Blue Canyon Technologies) levels, software development for tradespace analysis in constellation design (e.g. Goddard's TAT-C), planning and scheduling development for single spacecraft (e.g. GEO-CAPE), and aerial flight path optimization for UAVs (e.g. NASA Sensor Web), there is a gap in open-source, open-access software tools for planning and scheduling distributed satellite operations in terms of pointing and observing targets. This paper demonstrates results from a tool being developed for scheduling pointing operations of narrow field-of-view (FOV) sensors over a mission lifetime to maximize metrics such as global coverage and revisit statistics. Past research has shown the need for at least fourteen satellites to cover the Earth globally every day using a Landsat-like sensor. Tripling the FOV reduces the requirement to four satellites, but adds image distortion and BRDF complexities to the observed reflectance. If narrow-FOV sensors on a small satellite constellation were commanded by robust algorithms to slew their sensors dynamically, they could cover the global landmass in a coordinated fashion much faster, without compromising spatial resolution or incurring BRDF effects. Our algorithm to optimize constellation satellite pointing is based on a dynamic programming approach under the constraints of orbital mechanics and existing attitude control systems for small satellites.
    As a case study for our algorithm, we minimize the time required to cover the 17000 Landsat images with maximum signal-to-noise-ratio fall-off and minimum image distortion among the satellites, using Landsat's specifications. Attitude-specific constraints such as power consumption, response time, and stability were factored into the optimality computations. The algorithm can integrate cloud cover predictions, specific ground and air assets, and angular constraints.
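    The actual scheduler involves orbital mechanics and attitude dynamics, but the dynamic programming core can be illustrated on a toy problem: discrete pointing angles, a per-step slew limit, and a reward table standing in for targets visible at each step. All numbers below are made up for illustration and are not from the paper:

```python
def schedule_pointing(reward, angles, max_slew):
    """DP over (time step, pointing angle): maximize total imaging reward
    subject to a per-step slew limit between consecutive pointings.

    reward[t][k] = value of pointing at angles[k] during step t.
    """
    T, K = len(reward), len(angles)
    NEG = float("-inf")
    best = [[NEG] * K for _ in range(T)]
    back = [[0] * K for _ in range(T)]
    best[0] = list(reward[0])
    for t in range(1, T):
        for k in range(K):
            for j in range(K):
                # Transition allowed only if the attitude system can slew
                # from angles[j] to angles[k] within one time step.
                if abs(angles[k] - angles[j]) <= max_slew and best[t-1][j] > NEG:
                    v = best[t-1][j] + reward[t][k]
                    if v > best[t][k]:
                        best[t][k], back[t][k] = v, j
    k = max(range(K), key=lambda j: best[T-1][j])
    path = [k]
    for t in range(T - 1, 0, -1):
        k = back[t][k]
        path.append(k)
    path.reverse()
    return [angles[j] for j in path], max(best[T-1])

angles = [-30, -15, 0, 15, 30]          # discrete roll angles, degrees
reward = [[0, 1, 5, 1, 0],              # targets visible per step (toy data)
          [0, 0, 1, 4, 0],
          [6, 1, 0, 0, 0]]
plan, total = schedule_pointing(reward, angles, max_slew=15)
```

    Note the greedy path (0, then 15, then stranded far from -30) earns only 9; the DP accepts a zero-reward middle step to reach the high-value final target.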

  12. Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity analysis (SA) is an efficient approach for quantitatively determining how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the former is retained while the latter is eliminated for the scientific study at hand. However, these approaches ignore the disappearance of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods emphasizing parameter interactions.
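    The abstract does not specify how DGSAM scores parameters at each iteration, so the sketch below substitutes a crude one-at-a-time variance measure purely to illustrate the iterative remove-the-least-important loop; it is not the paper's method, and the toy model is invented:

```python
import statistics

def oat_sensitivity(f, free, frozen, lows, highs, n=21):
    """Crude one-at-a-time score: variance of f while sweeping one parameter,
    holding every other parameter at its midpoint or frozen value."""
    mid = {i: 0.5 * (lows[i] + highs[i]) for i in range(len(lows))}
    base = {**mid, **frozen}
    scores = {}
    for i in free:
        ys = []
        for k in range(n):
            x = dict(base)
            x[i] = lows[i] + (highs[i] - lows[i]) * k / (n - 1)
            ys.append(f([x[j] for j in range(len(lows))]))
        scores[i] = statistics.pvariance(ys)
    return scores

def backward_eliminate(f, lows, highs, keep=2):
    """Iteratively freeze the least influential parameter at its midpoint,
    re-scoring the survivors each round (the DGSAM-style dynamic loop)."""
    free = set(range(len(lows)))
    frozen = {}
    while len(free) > keep:
        scores = oat_sensitivity(f, free, frozen, lows, highs)
        worst = min(scores, key=scores.get)
        frozen[worst] = 0.5 * (lows[worst] + highs[worst])
        free.remove(worst)
    return sorted(free)

# Toy model: x0 and x2 dominate the output; x1 and x3 are nearly inert.
f = lambda x: 10 * x[0] ** 2 + 0.01 * x[1] + 5 * x[2] + 0.02 * x[3]
kept = backward_eliminate(f, lows=[-1] * 4, highs=[1] * 4, keep=2)
```

    Re-scoring after each removal is the point of the dynamic loop: it lets interaction effects involving already-frozen parameters drop out of the surviving scores.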

  13. Adaptive Grouping Cloud Model Shuffled Frog Leaping Algorithm for Solving Continuous Optimization Problems

    PubMed Central

    Liu, Haorui; Yi, Fengyan; Yang, Heli

    2016-01-01

    The shuffled frog leaping algorithm (SFLA) easily falls into a local optimum when solving multimodal function optimization problems, which degrades its accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the cloud model's favorable characteristics for transforming between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and assigns each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an “elite strategy” to update the location information of the existing elite frogs through the cloud model algorithm. This method narrows the search space and effectively mitigates entrapment in local optima; convergence speed and accuracy are thus significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
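    The cloud-model elite update is specific to this paper, but the underlying shuffled frog leaping loop it modifies can be sketched: sort the frogs, deal them into memeplexes, and let each memeplex's worst frog jump toward its best. Parameter values and the test function are illustrative assumptions:

```python
import random

def sfla(f, dim, n_frogs=30, n_memeplexes=5, local_iters=10,
         shuffles=30, lo=-5.0, hi=5.0, seed=0):
    """Basic shuffled frog leaping (without the paper's cloud-model step)."""
    rng = random.Random(seed)
    frogs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_frogs)]
    clamp = lambda v: min(hi, max(lo, v))
    for _ in range(shuffles):
        frogs.sort(key=f)
        gbest = frogs[0]
        # Deal frogs round-robin so each memeplex gets a spread of quality.
        plexes = [frogs[m::n_memeplexes] for m in range(n_memeplexes)]
        for plex in plexes:
            for _ in range(local_iters):
                plex.sort(key=f)
                best, worst = plex[0], plex[-1]
                step = [rng.random() * (b - w) for b, w in zip(best, worst)]
                cand = [clamp(w + s) for w, s in zip(worst, step)]
                if f(cand) >= f(worst):   # no gain: try the global best instead
                    step = [rng.random() * (b - w) for b, w in zip(gbest, worst)]
                    cand = [clamp(w + s) for w, s in zip(worst, step)]
                if f(cand) < f(worst):
                    plex[-1] = cand
                else:                     # censor: replace with a random frog
                    plex[-1] = [rng.uniform(lo, hi) for _ in range(dim)]
        frogs = [frog for plex in plexes for frog in plex]
    frogs.sort(key=f)
    return frogs[0], f(frogs[0])

best, val = sfla(lambda x: sum(t * t for t in x), dim=3)
```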

  14. Estimating of aquifer parameters from the single-well water-level measurements in response to advancing longwall mine by using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Karaman, Abdullah

    2017-04-01

    We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face follows a semi-analytical function that is not suitable for conventional inversion schemes because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions consisting of either the minimum or maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have long been used for many hydrogeologic and geophysical engineering problems. These methods exhibit difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization (PSO) has become a focus of modern global optimization; it is inspired by the social behaviour of bird swarms and appears to be a reliable and powerful algorithm for complex engineering applications. PSO does not depend on an initial model, and as a non-derivative stochastic process it is capable of searching all possible solutions in the model space around either local or global optimum points.

  15. Protein structure modeling for CASP10 by multiple layers of global optimization.

    PubMed

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

    In the template-based modeling (TBM) category of the CASP10 experiment, we introduced a new protocol called the protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm called conformational space annealing (CSA) is applied to the three layers of the TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates) and new terms that combine physical energy contributions such as the dynamic fragment assembly (DFA) energy, the DFIRE statistical potential energy, a hydrogen bonding term, etc. These physical energy terms are expected to guide structure modeling, especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on the random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, owing to the combination of three stages of CSA global optimization and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than those of the models at the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.

  16. System Risk Assessment and Allocation in Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools that synthesize multidisciplinary integration, probabilistic analysis, and optimization are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels: a global geometry design and a local tank design. Probabilistic analysis is performed on a high-fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.

  17. Very Similar Spacing-Effect Patterns in Very Different Learning/Practice Domains

    PubMed Central

    Kornmeier, Jürgen; Spitzer, Manfred; Sosic-Vasic, Zrinka

    2014-01-01

    Temporally distributed (“spaced”) learning can be twice as efficient as massed learning. This “spacing effect” occurs with a broad spectrum of learning materials, with humans of different ages, with non-human vertebrates, and also with invertebrates. This indicates that very basic learning mechanisms are at work (“generality”). Although most studies so far have focused on very narrow spacing interval ranges, there is some evidence for a non-monotonic behavior of this spacing effect (“nonlinearity”), with optimal spacing intervals at different time scales. In the current study we addressed both the nonlinearity aspect, by using a broad range of spacing intervals, and the generality aspect, by using very different learning/practice domains: participants learned German-Japanese word pairs and performed visual acuity tests. For each of six groups we used a different spacing interval between learning/practice units, from 7 min to 24 h in logarithmic steps. Memory retention was studied in three consecutive final tests: one, seven, and 28 days after the final learning unit. For both vocabulary learning and visual acuity performance we found a highly significant effect of the factor spacing interval on final test performance. In the 12-h spacing group about 85% of the learned words stayed in memory and nearly all of the visual acuity gain was preserved. In the 24-h spacing group, in contrast, only about 33% of the learned words were retained and the visual acuity gain dropped to zero. The very similar patterns of results from the two very different learning/practice domains point to similar underlying mechanisms. Further, our results indicate spacing in the range of 12 hours as optimal. A second peak may lie around a spacing interval of 20 min, but here the data are less clear. We discuss relations between our results and basic learning at the neuronal level. PMID:24609081

  18. Phase Transition in Protocols Minimizing Work Fluctuations

    NASA Astrophysics Data System (ADS)

    Solon, Alexandre P.; Horowitz, Jordan M.

    2018-05-01

    For two canonical examples of driven mesoscopic systems—a harmonically trapped Brownian particle and a quantum dot—we numerically determine the finite-time protocols that optimize the compromise between the standard deviation and the mean of the dissipated work. In the case of the oscillator, we observe a collection of protocols that smoothly trade off between average work and its fluctuations. However, for the quantum dot, we find that as we shift the weight of our optimization objective from average work to work standard deviation, there is an analog of a first-order phase transition in protocol space: two distinct protocols exchange global optimality with mixed protocols akin to phase coexistence. As a result, the two types of protocols possess qualitatively different properties and remain distinct even in the infinite duration limit: optimal-work-fluctuation protocols never coalesce with the minimal-work protocols, which therefore never become quasistatic.

  19. KSC-2011-3140

    NASA Image and Video Library

    2011-04-27

    CAPE CANAVERAL, Fla. -- In the Press Site bull pen at NASA's Kennedy Space Center in Florida, The LEGO Group's Daire McCabe and NASA's Associate Administrator for Education Leland Melvin talk about the LEGO sets going up to the International Space Station aboard space shuttle Endeavour's STS-134 mission. NASA and The LEGO Group will send 23 LEGO sets to the station and some of those sets include a space shuttle, an ISS model, a Global Positioning Satellite and NASA's Hubble Space Telescope. The sets will be used for NASA's Teaching From Space Project, which is part of a three-year Space Act Agreement with the toy maker to spark the interest of children in science, technology, engineering and mathematics (STEM). Liftoff is scheduled for April 29 at 3:47 p.m. EDT. This will be the final spaceflight for Endeavour. For more information visit, www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts134/index.html. Photo credit: NASA/Frankie Martin

  20. KSC-2011-3141

    NASA Image and Video Library

    2011-04-27

    CAPE CANAVERAL, Fla. -- In the Press Site bull pen at NASA's Kennedy Space Center in Florida, The LEGO Group's Daire McCabe and NASA's Associate Administrator for Education Leland Melvin talk about the LEGO sets going up to the International Space Station aboard space shuttle Endeavour's STS-134 mission. NASA and The LEGO Group will send 23 LEGO sets to the station and some of those sets include a space shuttle, an ISS model, a Global Positioning Satellite and NASA's Hubble Space Telescope. The sets will be used for NASA's Teaching From Space Project, which is part of a three-year Space Act Agreement with the toy maker to spark the interest of children in science, technology, engineering and mathematics (STEM). Liftoff is scheduled for April 29 at 3:47 p.m. EDT. This will be the final spaceflight for Endeavour. For more information visit, www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts134/index.html. Photo credit: NASA/Frankie Martin

  1. KSC-2011-3139

    NASA Image and Video Library

    2011-04-27

    CAPE CANAVERAL, Fla. -- In the Press Site bull pen at NASA's Kennedy Space Center in Florida, NASA Education Specialist Teresa Sindelar and The LEGO Group's Daire McCabe talk about the LEGO sets going up to the International Space Station aboard space shuttle Endeavour's STS-134 mission. NASA and The LEGO Group will send 23 LEGO sets to the station and some of those sets include a space shuttle, an ISS model, a Global Positioning Satellite and NASA's Hubble Space Telescope. The sets will be used for NASA's Teaching From Space Project, which is part of a three-year Space Act Agreement with the toy maker to spark the interest of children in science, technology, engineering and mathematics (STEM). Liftoff is scheduled for April 29 at 3:47 p.m. EDT. This will be the final spaceflight for Endeavour. For more information visit, www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts134/index.html. Photo credit: NASA/Frankie Martin

  2. A spatially informative optic flow model of bee colony with saccadic flight strategy for global optimization.

    PubMed

    Das, Swagatam; Biswas, Subhodip; Panigrahi, Bijaya K; Kundu, Souvik; Basu, Debabrota

    2014-10-01

    This paper presents a novel search metaheuristic inspired by the physical interpretation of the optic flow of information in honeybees about their spatial surroundings, which helps them orient themselves and navigate through the search space while foraging. The interpreted behavior, combined with minimal foraging, is simulated via the artificial bee colony algorithm to develop a robust search technique that exhibits elevated performance in multidimensional objective space. Through a detailed experimental study and rigorous analysis, we highlight the statistical superiority enjoyed by our algorithm over a wide variety of functions as compared to some highly competitive state-of-the-art methods.
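    The optic-flow and saccadic components are this paper's contribution and are not reproduced here; the sketch below is only the standard artificial bee colony baseline the authors build on (employed, onlooker, and scout phases), with illustrative parameters:

```python
import random

def abc(f, dim, n_food=20, limit=30, cycles=200, lo=-5.0, hi=5.0, seed=0):
    """Textbook artificial bee colony (without the paper's optic-flow model)."""
    rng = random.Random(seed)
    food = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in food]
    trials = [0] * n_food

    def try_neighbor(i):
        # Perturb one coordinate toward/away from a random other source.
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.randrange(dim)
        cand = food[i][:]
        cand[d] += rng.uniform(-1, 1) * (food[i][d] - food[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        fc = f(cand)
        if fc < fit[i]:
            food[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):          # employed bees: one try per source
            try_neighbor(i)
        worst = max(fit)
        weights = [worst - v + 1e-12 for v in fit]
        for _ in range(n_food):          # onlookers favour better sources
            i = rng.choices(range(n_food), weights=weights)[0]
            try_neighbor(i)
        for i in range(n_food):          # scouts abandon exhausted sources
            if trials[i] > limit:
                food[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(food[i]), 0

    i = min(range(n_food), key=lambda k: fit[k])
    return food[i], fit[i]

best, val = abc(lambda x: sum(t * t for t in x), dim=4)
```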

  3. Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey

    1995-01-01

    An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.

  4. The Camassa-Holm equation as an incompressible Euler equation: A geometric point of view

    NASA Astrophysics Data System (ADS)

    Gallouët, Thomas; Vialard, François-Xavier

    2018-04-01

    The group of diffeomorphisms of a compact manifold endowed with the L2 metric acting on the space of probability densities gives a unifying framework for the incompressible Euler equation and the theory of optimal mass transport. Recently, several authors have extended optimal transport to the space of positive Radon measures, where the Wasserstein-Fisher-Rao distance is a natural extension of the classical L2-Wasserstein distance. In this paper, we show a similar relation between this unbalanced optimal transport problem and the Hdiv right-invariant metric on the group of diffeomorphisms, which corresponds to the Camassa-Holm (CH) equation in one dimension. Geometrically, we present an isometric embedding of the group of diffeomorphisms endowed with this right-invariant metric into the automorphism group of the fiber bundle of half densities endowed with an L2 type of cone metric. This leads to a new formulation of the (generalized) CH equation as a geodesic equation on an isotropy subgroup of this automorphism group; on S1, solutions to the standard CH equation thus give radially 1-homogeneous solutions of the incompressible Euler equation on R2 that preserve a radial density with a singularity at 0. Another application consists in proving that smooth solutions of the Euler-Arnold equation for the Hdiv right-invariant metric are length-minimizing geodesics for sufficiently short times.

  5. Group theoretical formulation of free fall and projectile motion

    NASA Astrophysics Data System (ADS)

    Düztaş, Koray

    2018-07-01

    In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements ϕt by their action on the points of the phase space, and we generalize this approach to projectile motion. We evaluate the group orbits with regard to their relations to the physical orbits of particles and to unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem through its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
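    The one-parameter group structure can be checked concretely: for constant acceleration the flow ϕt acts on phase-space points (x, v), and composing two flows adds their parameters, ϕs∘ϕt = ϕs+t. A minimal numerical check (the acceleration value and sample point are chosen arbitrarily):

```python
def phi(t, state, a=-9.8):
    """Constant-acceleration flow on phase space: (x, v) -> phi_t(x, v)."""
    x, v = state
    return (x + v * t + 0.5 * a * t * t, v + a * t)

# Group law: phi_s(phi_t(p)) should equal phi_{s+t}(p) for any p, s, t.
p = (10.0, 3.0)
lhs = phi(2.0, phi(1.5, p))
rhs = phi(3.5, p)
```

    The inverse element is ϕ(-t), and ϕ(0) is the identity, completing the group axioms; the same check fails for a time-dependent acceleration, matching the paper's remark.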

  6. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  7. Simultaneous stabilization of global temperature and precipitation through cocktail geoengineering

    NASA Astrophysics Data System (ADS)

    Cao, Long; Duan, Lei; Bala, Govindasamy; Caldeira, Ken

    2017-07-01

    Solar geoengineering has been proposed as a backup plan to offset some aspects of anthropogenic climate change if timely CO2 emission reductions fail to materialize. Modeling studies have shown that there are trade-offs between changes in temperature and the hydrological cycle in response to solar geoengineering. Here we investigate the possibility of stabilizing both global mean temperature and precipitation simultaneously by combining two geoengineering approaches: stratospheric sulfate aerosol increase (SAI), which deflects sunlight to space, and cirrus cloud thinning (CCT), which enables more longwave radiation to escape to space. Using the slab ocean configuration of the National Center for Atmospheric Research Community Earth System Model, we simulate SAI by uniformly adding sulfate aerosol in the upper stratosphere and CCT by uniformly increasing the cirrus cloud ice particle falling speed. Under an idealized warming scenario of abrupt quadrupling of atmospheric CO2, we show that by combining appropriate amounts of SAI and CCT geoengineering, global mean (or land mean) temperature and precipitation can be restored simultaneously to preindustrial levels. However, compared to SAI alone, cocktail geoengineering by mixing SAI and CCT does not markedly improve the overall similarity between the geoengineered climate and the preindustrial climate on regional scales. Some optimal spatially nonuniform mixture of SAI with CCT might have the potential to better mitigate climate change at both the global and regional scales.

  8. An Integrated Method for Airfoil Optimization

    NASA Astrophysics Data System (ADS)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove overwhelmingly time-consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed differs from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts, which, by focusing on a single airfoil family, inherently limited the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global, rather than local, maximum is found. Additionally, the method used is amenable to customization to suit specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. 
This thesis found that an airfoil configuration meeting multiple objectives could be identified for a given set of nominal operational conditions from a broad design space, using minimal computational resources both in absolute terms and relative to traditional analysis techniques. Aerodynamicists, program managers, aircraft configuration specialists, and anyone else in charge of aircraft configuration, design studies, and program-level decisions might find the evaluation and optimization method proposed of interest.

  9. Berry phases for Landau Hamiltonians on deformed tori

    NASA Astrophysics Data System (ADS)

    Lévay, Péter

    1995-06-01

    Parametrized families of Landau Hamiltonians are introduced, where the parameter space is the Teichmüller space (topologically the complex upper half plane) corresponding to deformations of tori. The underlying SO(2,1) symmetry of the families enables an explicit calculation of the Berry phases picked up by the eigenstates when the torus is slowly deformed. It is also shown that apart from these phases, which are local in origin, there are global non-Abelian ones too, related to the hidden discrete symmetry group Γϑ (the theta group, a subgroup of the modular group) of the families. The induced Riemannian structure on the parameter space is the usual Poincaré metric of constant negative curvature on the upper half plane. Due to the discrete symmetry Γϑ, the geodesic motion restricted to the fundamental domain of this group is chaotic.
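For reference, the Poincaré metric on the upper half plane invoked in this abstract is the standard hyperbolic metric

```latex
ds^2 = \frac{dx^2 + dy^2}{y^2}, \qquad y > 0,
```

which has constant Gaussian curvature −1; its geodesics are vertical half-lines and semicircles centered on the real axis.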

  10. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    NASA Astrophysics Data System (ADS)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators rely on fluorescent bulbs that are not tunable and occupy considerable space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation: minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for the CCT and CRI calculations, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.
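The fitness error toward a target spectrum that drives such an optimization can be sketched as a simple sum-of-squares deviation. The Gaussian LED model, peak wavelengths, and weights below are illustrative assumptions, not the paper's data:

```python
# Minimal sketch of spectral-matching fitness for an LED mixture.
import math

def led_spectrum(peak_nm, width_nm, wavelengths):
    """Idealized Gaussian emission profile of a single LED (assumed model)."""
    return [math.exp(-((w - peak_nm) / width_nm) ** 2) for w in wavelengths]

def mixture(weights, led_spectra):
    """Weighted sum of the individual LED spectra."""
    return [sum(w * s[i] for w, s in zip(weights, led_spectra))
            for i in range(len(led_spectra[0]))]

def fitness_error(weights, led_spectra, target):
    """Sum-of-squares deviation from the target spectrum (to be minimized)."""
    mix = mixture(weights, led_spectra)
    return sum((m - t) ** 2 for m, t in zip(mix, target))

wavelengths = list(range(400, 701, 10))  # visible band, 10 nm steps
leds = [led_spectrum(p, 30.0, wavelengths) for p in (450, 520, 590, 640)]
# A target built from a known mixture should be matched with zero error.
target = mixture([0.8, 1.0, 0.6, 0.9], leds)
```

An optimizer such as GEO (or Levenberg-Marquardt, for the comparison the paper makes) would then search over the weight vector to minimize `fitness_error`, subject to the CCT, CRI and flux objectives.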

  11. Weighted augmented Jacobian matrix with a variable coefficient method for kinematics mapping of space teleoperation based on human-robot motion similarity

    NASA Astrophysics Data System (ADS)

    Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo

    2016-10-01

    Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuitiveness of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method for the coupled Cartesian-joint space in space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve global similarity of human-robot motion. A clamping weighted least-norm scheme is introduced to achieve local optimization, and the operating ratio coefficient is varied to pursue similarity at the elbow joint. Similarity in Cartesian space and the satisfaction of joint constraints are analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuitiveness of space teleoperation for completing complex space tasks.
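The paper's WAJM-VC scheme is not spelled out in the abstract, but damped (weighted) least-squares Jacobian mapping, of which it is a refinement, can be sketched for a toy 2-link planar arm. The link lengths, configuration, and damping value here are illustrative assumptions:

```python
# Damped least-squares step: qdot = J^T (J J^T + damping^2 I)^{-1} xdot,
# the generic building block behind weighted Jacobian mapping schemes.
import math

L1, L2 = 1.0, 0.8  # assumed link lengths of a toy 2-link planar arm

def jacobian(q1, q2):
    """2x2 Jacobian of the planar 2-link arm end-effector position."""
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    return [[j11, j12], [j21, j22]]

def damped_step(q, xdot, damping=0.01):
    """Map a desired Cartesian velocity xdot to joint velocities (2x2 case)."""
    J = jacobian(*q)
    # A = J J^T + damping^2 I (symmetric 2x2), inverted in closed form.
    a = J[0][0] ** 2 + J[0][1] ** 2 + damping ** 2
    b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
    d = J[1][0] ** 2 + J[1][1] ** 2 + damping ** 2
    det = a * d - b * b
    y0 = (d * xdot[0] - b * xdot[1]) / det
    y1 = (-b * xdot[0] + a * xdot[1]) / det
    return [J[0][0] * y0 + J[1][0] * y1, J[0][1] * y0 + J[1][1] * y1]
```

The damping factor trades tracking accuracy for bounded joint velocities near singularities, which is the role the abstract assigns to the damping factor and clamping velocity.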

  12. A new effective operator for the hybrid algorithm for solving global optimisation problems

    NASA Astrophysics Data System (ADS)

    Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac

    2018-04-01

    Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find a globally optimal solution using these algorithms. Based on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability in areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions than those obtained by the other algorithms. Moreover, the authors propose two parameters for balancing local and global search, as well as for balancing the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions used in previous works show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective at solving single-objective optimisation problems than the other existing algorithms.
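The abstract does not define the Mean-Search Operator precisely; one plausible reading, sketched here purely as an illustration, is that a new candidate is sampled around the mean of the current best solutions, probing a region that pairwise recombination operators may never visit. All names, the benchmark, and the parameter values are assumptions:

```python
# Hedged sketch of a mean-centred search operator inside a simple
# population loop, on the sphere benchmark (global minimum 0 at origin).
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def mean_search(population, scale, rng):
    """Sample one candidate around the component-wise population mean."""
    dim = len(population[0])
    mean = [sum(ind[i] for ind in population) / len(population)
            for i in range(dim)]
    return [m + rng.gauss(0.0, scale) for m in mean]

def optimize(dim=5, pop_size=20, iters=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=sphere)
        # Sample around the mean of the best half; replace the worst
        # individual if the candidate improves on it.
        cand = mean_search(pop[:pop_size // 2], scale=0.3, rng=rng)
        if sphere(cand) < sphere(pop[-1]):
            pop[-1] = cand
    return min(map(sphere, pop))
```

In MPC this operator would run alongside the PSO- and CRO-derived operators rather than alone; the sketch only shows why a mean-based sample can land in regions the parent operators do not reach.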

  13. ICPP: Approach for Understanding Complexity of Plasma

    NASA Astrophysics Data System (ADS)

    Sato, Tetsuya

    2000-10-01

    In this talk I wish to present an IT system that could promote the science of complexity. To deal with a seemingly `complex' phenomenon, meaning one `beyond analytical manipulation', computer simulation is a viable and powerful tool. However, complexity implies a concept beyond the horizon of reductionism. Therefore, rather than simply solving a complex phenomenon for a given boundary condition, one must establish an intelligent way of attacking the mutual evolution of a system and its environment. NIFS-TCSC has been developing a prototype system that consists of supercomputers, virtual reality devices and a high-speed network system. Let me explain this by considering a global atmospheric circulation group, a global oceanic circulation group and a local weather prediction group. The local weather prediction group predicts local changes in the weather, such as the formation of cloud and rain in the near future, under the global conditions obtained by the global atmospheric and oceanic groups. The global groups run simulations by modifying the local heat source/sink evaluated by the local weather prediction and then obtain the global conditions at the next time step. By repeating such a feedback procedure one can predict the mutual evolution of the local system and its environment. Mutual information exchanges among the groups are carried out instantaneously in the networked common virtual reality space, in which 3-D global and local images of the atmospheric and oceanic circulation and the cloud and rain maps can be manipulated arbitrarily by any of the groups and commonly viewed. The present networking system has the great advantage that any simulation groups can freely and arbitrarily change their alignment, so that the mutual evolution of any stratum of the system becomes tractable by utilizing this network system.

  14. GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Savić, D. A.; Kapelan, Z.

    2017-03-01

    A new hybrid optimizer, called the genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate, to a great extent, the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. The operators are selected based on their leaping ability in the objective space from the global and local search perspectives, while the strategies steer the optimization and balance exploration and exploitation simultaneously. A distinguishing feature of GALAXY is that it eliminates the majority of parameters, making it robust and easy to use. Comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness. GALAXY can identify better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between global and local search. Moreover, its advantages over other MOEAs become more substantial as the complexity of the design problem increases.
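The multiobjective comparisons underlying any MOEA of this kind reduce to Pareto dominance; a minimal nondominated filter, with both objectives minimized (e.g. pipe cost versus hydraulic deficit, purely as illustrative labels), might look like:

```python
# Minimal Pareto-dominance filter for bi-objective minimization.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy objective vectors (cost, deficit); (3, 4) and (5, 5) are dominated.
front = nondominated([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

The "better converged and distributed boundary solutions" reported for GALAXY are statements about such a front: how close it is to the true trade-off curve and how evenly it covers it.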

  15. Generating Multi-Destination Maps.

    PubMed

    Zhang, Junsong; Fan, Jiepeng; Luo, Zhenshan

    2017-08-01

    Multi-destination maps are a kind of navigation map intended to guide visitors to multiple destinations within a region, which can be of great help to urban visitors. However, they are not yet available in current online map services. To address this issue, we introduce a novel layout model designed especially for generating multi-destination maps, which considers both the global and the local layout of a multi-destination map. We model the layout problem as a graph drawing that satisfies a set of hard and soft constraints. In the global layout phase, we balance the scale factors between ROIs. In the local layout phase, we make all edges clearly visible and optimize the map layout to preserve the relative lengths and angles of roads. We also propose a perturbation-based optimization method to find an optimal layout in the complex solution space. The multi-destination maps generated by our system are potentially feasible on modern mobile devices, and our results can show an overview and a detail view of the whole map at the same time. In addition, we perform a user study to evaluate the effectiveness of our method, and the results show that the multi-destination maps achieve our goals well.
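Perturbation-based layout optimization of the kind described here can be sketched as a loop that randomly perturbs positions and keeps changes that lower a penalty combining hard and soft constraints. The 1-D "layout", the penalty terms, and all constants below are illustrative stand-ins for the paper's graph-drawing constraints:

```python
# Sketch of perturbation-based search over a layout penalty.
import random

def penalty(xs, min_gap=1.0, targets=(0.0, 3.0, 6.0, 9.0)):
    """Soft term: stay near target positions; hard term: keep gaps open."""
    soft = sum((x - t) ** 2 for x, t in zip(xs, targets))
    hard = sum(max(0.0, min_gap - (b - a)) ** 2 * 100.0
               for a, b in zip(xs, xs[1:]))
    return soft + hard

def perturb_search(start, iters=2000, step=0.5, seed=1):
    rng = random.Random(seed)
    best, best_cost = list(start), penalty(start)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)  # random local perturbation
        cand.sort()                          # keep the 1-D ordering
        cost = penalty(cand)
        if cost < best_cost:                 # greedy accept-if-better
            best, best_cost = cand, cost
    return best, best_cost
```

A real map layout would perturb 2-D node positions and label placements, but the accept-if-better structure is the same.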

  16. Cross-scale coupling studies with the Heliophysics/Geospace System Observatory: THEMIS/ARTEMIS contributions

    NASA Astrophysics Data System (ADS)

    Angelopoulos, V.; Hietala, H.; Liu, Z.; Mende, S. B.; Phan, T.; Nishimura, T.; Strangeway, R. J.; Burch, J. L.; Moore, T. E.; Giles, B. L.

    2015-12-01

    The recent launch of MMS, the impending launch of ERG, the continued availability of space (NASA, NOAA, International) and ground based assets (THEMIS GBOs, TREx, SuperDARN) enable a comprehensive study of global drivers of (and responses to) kinetic processes at the magnetopause, the magnetotail, the inner magnetosphere and the ionosphere. Previously unresolved questions related to the nature of the modes of magnetospheric convection (pseudobreakups, substorms, SMCs and storms) can now be addressed simultaneously at a kinetic level (with multi-spacecraft missions) and at a global level (with the emerging, powerful H/GSO). THEMIS has been tasked to perform orbital changes that will optimize the observatory, and simultaneously place its probes, along with MMS's, at the heart of where critical kinetic processes occur, near sites of magnetic reconnection and magnetic energy conversion, and in optimal view of ground based assets. I will discuss these unique alignments of the H/GSO fleet that can reveal how cross-scale coupling is manifest, allowing us to view old paradigms in a new light.

  17. Using Combustion Tracers to Estimate Surface Black Carbon Distributions in WRF-Chem

    NASA Astrophysics Data System (ADS)

    Raman, A.; Arellano, A. F.

    2015-12-01

    Black Carbon (BC) emissions significantly affect the global and regional climate, air quality, and human health. However, BC observations are currently limited in space and time; leading to considerable uncertainties in the estimates of BC distribution from regional and global models. Here, we investigate the usefulness of carbon monoxide (CO) in quantifying BC across continental United States (CONUS). We use high resolution EPA AQS observations of CO and IMPROVE BC to estimate BC/CO ratios. We model the BC and CO distribution using the community Weather Research and Forecasting model with Chemistry (WRF-Chem). We configured WRF-Chem using MOZART chemistry, NEI 2005, MEGAN, and FINNv1.5 for anthropogenic, biogenic and fire emissions, respectively. In this work, we address the following three key questions: 1) What are the discrepancies in the estimates of BC and CO distributions across CONUS during summer and winter periods?, 2) How do BC/CO ratios change for different spatial and temporal regimes?, 3) Can we get better estimates of BC from WRF-Chem if we use BC/CO ratios along with optimizing CO concentrations? We compare ratios derived from the model and observations and develop characteristic ratios for several geographical and temporal regimes. We use an independent set of measurements of BC and CO to evaluate these ratios. Finally, we use a Bayesian synthesis inversion to optimize CO from WRF-Chem using regionally tagged CO tracers. We multiply the characteristic ratios we derived with the optimized CO to obtain BC distributions. Our initial results suggest that the maximum ratios of BC versus CO occur in the western US during the summer (average: 4 ng/m3/ppbv) and in the southeast during the winter (average: 5 ng/m3/ppbv). However, we find that these relationships vary in space and time and are highly dependent on fuel usage and meteorology. 
We find that optimizing CO using EPA-AQS provides improvements in BC but only over areas where BC/CO ratios are close to observed values.
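The final scaling step the abstract describes is a simple multiplication of a regime-specific BC/CO ratio by the optimized CO field. The ratios below mirror the averages quoted in the abstract; the CO values and the lookup structure are made-up placeholders:

```python
# Sketch of the BC estimation step: BC = (BC/CO ratio) x optimized CO.

BC_CO_RATIO = {            # ng m^-3 per ppbv, by (region, season)
    ("western_us", "summer"): 4.0,   # average quoted in the abstract
    ("southeast_us", "winter"): 5.0, # average quoted in the abstract
}

def estimate_bc(co_ppbv, region, season):
    """BC (ng m^-3) from a characteristic ratio and optimized CO (ppbv)."""
    return BC_CO_RATIO[(region, season)] * co_ppbv

co_field = [120.0, 150.0, 90.0]  # hypothetical optimized CO values, ppbv
bc_field = [estimate_bc(c, "western_us", "summer") for c in co_field]
```

As the abstract cautions, such ratios vary in space and time with fuel usage and meteorology, so a single lookup like this only works within a regime where the ratio is roughly constant.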

  18. Potential applications of skip SMV with thrust engine

    NASA Astrophysics Data System (ADS)

    Wang, Weilin; Savvaris, Al

    2016-11-01

    This paper investigates the potential applications of Space Maneuver Vehicles (SMV) with skip trajectories. Owing to soaring space operations over the past decades, the risk posed by space debris has increased considerably, including collision risks to space assets, property on the ground, and even aviation. Many active debris removal methods have been investigated, and in this paper a debris remediation method based on the skip SMV is first proposed. The key point is to perform a controlled re-entry. These vehicles are expected to achieve a trans-atmospheric maneuver with a thrust engine. If debris is released at an altitude below 80 km, the debris can be captured by atmospheric drag and the re-entry interface prediction accuracy is improved. Moreover, if the debris is released in a cargo at a much lower altitude, this technique protects high-value space assets from break-up in the atmosphere and improves landing accuracy. To demonstrate the feasibility of this concept, the present paper presents simulation results for two specific mission profiles: (1) descent to a predetermined altitude; (2) descent to a predetermined point (altitude, longitude and latitude). The evolutionary collocation method is adopted for skip trajectory optimization owing to its global optimality and high accuracy. This method is a two-step optimization approach based on a heuristic algorithm and the collocation method. The optimal-control problem is transformed into a nonlinear programming problem (NLP) which can be solved efficiently and accurately by a sequential quadratic programming (SQP) procedure. However, such a method is sensitive to initial values. To reduce this sensitivity, a genetic algorithm (GA) is adopted to refine the grids and provide near-optimum initial values. 
By comparing the simulation data from the different scenarios, it is found that the skip SMV is feasible for active debris removal and that the evolutionary collocation method yields a trustworthy re-entry trajectory satisfying the path and boundary constraints.
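The two-step strategy described above (a heuristic global search seeding a gradient-based local solver) can be illustrated on a toy multimodal function. The cheap random sampler stands in for the GA, the pattern search stands in for SQP, and the test function is illustrative; the real problem is an NLP obtained from trajectory collocation:

```python
# Toy two-step optimization: global sampling seeds a local refinement.
import math
import random

def multimodal(x):
    """Rastrigin-like 1-D function: many local minima, global minimum 0 at x=0."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def global_seed(samples=200, lo=-5.12, hi=5.12, seed=7):
    """Stand-in for the GA: best of a batch of random samples."""
    rng = random.Random(seed)
    pts = [rng.uniform(lo, hi) for _ in range(samples)]
    return min(pts, key=multimodal)

def local_refine(x, step=0.1, iters=100):
    """Stand-in for SQP: simple pattern search, shrinking the step when stuck."""
    for _ in range(iters):
        moved = False
        for cand in (x - step, x + step):
            if multimodal(cand) < multimodal(x):
                x, moved = cand, True
        if not moved:
            step *= 0.5
    return x

x_star = local_refine(global_seed())
```

Started from a random point, the local step alone would typically stall in the nearest basin; seeding it with the global sampler is exactly the sensitivity fix the abstract attributes to the GA.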

  19. Detail and gestalt focus in individuals with optimal outcomes from Autism Spectrum Disorders

    PubMed Central

    Fitch, Allison; Fein, Deborah A.; Eigsti, Inge-Marie

    2015-01-01

    Individuals with high-functioning autism (HFA) have a cognitive style that privileges local over global or gestalt details. While not a core symptom of autism, individuals with HFA seem to reliably show this bias. Our lab has been studying a sample of children who have overcome their early ASD diagnoses, showing “optimal outcomes” (OO). This study characterizes performance by OO, HFA, and typically developing (TD) adolescents as they describe paintings under cognitive load. Analyses of detail focus in painting descriptions indicated that the HFA group displayed significantly more local focus than both OO and TD groups, while the OO and TD groups did not differ. We discuss implications for the centrality of detail focus to the autism diagnosis. PMID:25563455

  20. Detail and gestalt focus in individuals with optimal outcomes from autism spectrum disorders.

    PubMed

    Fitch, Allison; Fein, Deborah A; Eigsti, Inge-Marie

    2015-06-01

    Individuals with high-functioning autism (HFA) have a cognitive style that privileges local over global or gestalt details. While not a core symptom of autism, individuals with HFA seem to reliably show this bias. Our lab has been studying a sample of children who have overcome their early ASD diagnoses, showing "optimal outcomes" (OO). This study characterizes performance by OO, HFA, and typically developing (TD) adolescents as they describe paintings under cognitive load. Analyses of detail focus in painting descriptions indicated that the HFA group displayed significantly more local focus than both OO and TD groups, while the OO and TD groups did not differ. We discuss implications for the centrality of detail focus to the autism diagnosis.

  1. OAST Space Theme Workshop. Volume 3: Working group summary. 6: Power (P-2). A. Statement. B. Technology needs (form 1). C. Priority assessment (form 2)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Power requirements for the multipurpose space power platform, for space industrialization, SETI, the solar system exploration facility, and for global services are assessed for various launch dates. Priorities and initiatives for the development of elements of space power systems are described for systems using light power input (solar energy source) or thermal power input (solar, chemical, nuclear, radioisotopes, reactors). Systems for power conversion, power processing, and distribution and control are likewise examined.

  2. Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space

    PubMed Central

    Lai, Rongjie; Wang, Danny J.J.; Pelletier, Daniel; Mohr, David; Sicotte, Nancy; Toga, Arthur W.

    2014-01-01

    In this paper we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability of aligning salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects. PMID:24686245

  3. Optimal design of wind barriers using 3D computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Fang, H.; Wu, X.; Yang, X.

    2017-12-01

    Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid or semi-arid areas. Optimal design of wind barriers is therefore critical in Aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations of flow passing through wind barriers with different structural parameters. To validate the simulation results, we first inter-compare the simulated flow fields with those from both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row number, inter-row spacing and belt scheme). The results show that wind barriers with a porosity of 0.35 provide the longest shelter distance (i.e., the distance over which wind velocity is reduced by more than 50%) and thus are recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both the wind-velocity reduction effect and the economic expense into account. The calculated cost function shows that a 3-row-belt scheme with an inter-row spacing of 6h (where h is the height of the wind barriers) and an inter-belt spacing of 12h is the most effective.
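A design cost function of the kind used above rewards shelter and penalizes material expense, so that adding rows helps only while the extra shelter is worth its cost. The diminishing-returns shelter model and all coefficients below are made up for illustration (chosen so the toy happens to prefer three rows, echoing the study's finding, not derived from its CFD data):

```python
# Hedged sketch of a shelter-vs-expense cost function for barrier design.

def shelter_benefit(rows, per_row_gain=0.55):
    """Diminishing shelter gain as rows are added (illustrative model)."""
    benefit = 0.0
    for _ in range(rows):
        benefit += per_row_gain * (1.0 - benefit)
    return benefit  # fraction of wind-velocity reduction, in [0, 1)

def cost(rows, expense_per_row=0.08):
    """Lower is better: residual (unsheltered) wind plus construction expense."""
    return (1.0 - shelter_benefit(rows)) + expense_per_row * rows

best_rows = min(range(1, 9), key=cost)
```

In the study itself the benefit term would come from the simulated velocity-reduction fields rather than a closed-form model, but the trade-off structure is the same.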

  4. Examining the teaching roles and experiences of non-physician health care providers in family medicine education: a qualitative study.

    PubMed

    Beber, Serena; Antao, Viola; Telner, Deanna; Krueger, Paul; Peranson, Judith; Meaney, Christopher; Meindl, Maria; Webster, Fiona

    2015-02-13

    Primary care reform in Canada and globally has encouraged the development of interprofessional primary care initiatives. This has led to significant involvement of non-physician Health Care Providers (NPHCPs) in the teaching of medical trainees. The objective of this study was to understand the experiences, supports and challenges facing non-physician health care providers in Family Medicine education. Four focus groups were conducted using a semi-structured interview guide with twenty-one NPHCPs involved in teaching at the University of Toronto, Department of Family & Community Medicine. The focus groups were transcribed and analyzed for recurrent themes, and the multi-disciplinary research team held several meetings to discuss them. NPHCPs were highly involved in Family Medicine education, both formally and informally. NPHCPs felt valued as teachers, but often only after learners had come to understand their educator role through increased time and exposure. NPHCPs reported a lack of advance information about learner knowledge levels and expectations, and missed opportunities to give feedback or receive teaching evaluations. Adequate preparation time, teaching space and financial compensation were important to NPHCPs, yet were often lacking. There was low awareness of, but high interest in, faculty status and professional development opportunities. Sharing learner goals and objectives and offering NPHCPs feedback and evaluation would help to formalize NPHCP roles and optimize their capacity for cross-professional teaching. Preparation time and dedicated teaching space are also necessary. NPHCPs should be encouraged to pursue faculty appointments and to access ongoing professional development opportunities.

  5. A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Grunberg, D. B.

    1986-01-01

    A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.

  6. Fizeau interferometry from space: a challenging frontier in global astrometry

    NASA Astrophysics Data System (ADS)

    Loreggia, Davide; Gardiol, Daniele; Gai, Mario; Lattanzi, Mario G.; Busonero, Deborah

    2004-10-01

    The design and performance of a Fizeau interferometer with a long focal length and a large field of view are discussed. The optical scheme presented is well suited to very accurate astrometric measurements from space, being optimised, in terms of geometry and aberrations, to observe astronomical targets down to visual magnitude mV=20, with a measurement accuracy of 10 microarcseconds at mV=15. This study is in the context of the next generation of astrometric space missions, in particular a mission profile similar to that of the Gaia mission of the European Space Agency. Beyond the accuracy goal, the great effort devoted to reducing optical aberrations, particularly distortion, aims at optimal exploitation of data acquired with CCD arrays working in Time Delay Integration mode. The design solution we present reaches the astrometric goals with a field of view of 0.5 square degrees.

  7. Space manufacturing III; Proceedings of the Fourth Conference, Princeton University, Princeton, N.J., May 14-17, 1979

    NASA Technical Reports Server (NTRS)

    Grey, J. (Editor); Krop, C.

    1979-01-01

    Papers are presented on the various technological, political, economic, environmental and social aspects of large manufacturing facilities in space. Specific topics include the potential global market for satellite solar power stations in 2025, the electrostatic separation of lunar soil, methods for extraterrestrial materials processing, the socio-political status of efforts toward the development of space manufacturing facilities, the financing of space industrialization, the optimization of space manufacturing systems, the design and project status of Mass Driver Two, and the use of laser-boosted lighter-than-air vehicles as heavy-lift launch vehicles. Attention is also given to systems integration in the development of controlled ecological life support systems, the design of a space manufacturing facility to use lunar materials, high performance solar sails, the environmental effects of the satellite power system reference design, the guidance, trajectory and capture of lunar materials ejected from the Moon by mass driver, the relative design merits of zero-gravity and one-gravity space environments, consciousness alteration in space and the prospecting and retrieval of asteroids.

  8. Conceptual Research of Lunar-based Earth Observation for Polar Glacier Motion

    NASA Astrophysics Data System (ADS)

    Ruan, Zhixing; Liu, Guang; Ding, Yixing

    2016-07-01

    The ice flow velocity of glaciers is important for estimating the polar ice sheet mass balance, and it is of great significance for studies of rising sea level against the background of global warming. So far, however, long-term and global measurements of these macro-scale motion processes of the polar glaciers have hardly been achieved by Earth Observation (EO) techniques from the ground, aircraft, or satellites in space. Facing the demand for space technology for large-scale global environmental change observation, especially changes in polar glaciers, this paper proposes a new concept: setting up sensors on the lunar surface and using the Moon as a platform for Earth observation, transmitting the data back to Earth. Lunar-based Earth observation, which enables the Earth's large-scale, continuous, long-term dynamic motions to be measured, is expected to provide a new solution to the problems mentioned above. According to the pattern and characteristics of polar glacier motion, we propose a comprehensive investigation of Lunar-based Earth observation with synthetic aperture radar (SAR). Via theoretical modeling and experimental simulation inversion, intensive studies of Lunar-based Earth observation of glacier motion in the polar regions will be implemented, including basic InSAR theory, InSAR observation modes, and optimization methods for their key parameters. This work will help to creatively expand the EO technique system from space. In addition, it will contribute to establishing the theoretical foundation for global, long-term, and continuous observation of glacier motion in the Antarctic and the Arctic.

  9. Tensor voting for image correction by global and local intensity alignment.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2005-01-01

    This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
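The monotonic constraint on the replacement function can be illustrated without the authors' tensor-voting machinery. The sketch below fits a monotone nondecreasing curve to intensity matches with the classical pool-adjacent-violators algorithm; it is a stand-in for the voted replacement function, not the paper's procedure.

```python
def monotone_fit(y):
    # Pool-adjacent-violators: least-squares monotone nondecreasing fit to y.
    # Whenever a block mean exceeds its successor, the two blocks are merged
    # into one weighted-average block, enforcing monotonicity.
    means, weights = [], []
    for v in y:
        means.append(float(v))
        weights.append(1.0)
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (means[-2] * weights[-2] + means[-1] * weights[-1]) / w
            means[-2:], weights[-2:] = [m], [w]
    fit = []
    for m, w in zip(means, weights):
        fit.extend([m] * int(w))  # expand merged blocks back to sample length
    return fit
```

Applied to target intensities sorted by source intensity, the fitted values define a monotone intensity replacement curve, e.g. `monotone_fit([1, 3, 2, 4])` pools the violating pair into `[1.0, 2.5, 2.5, 4.0]`.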

  10. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
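The consensus-based PSO and Trust-Tech stages are specific to the paper, but the underlying particle swarm update is standard. A minimal global-best PSO on a sphere test function, with commonly used coefficient values, might look like:

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.72, 1.49, 1.49                # common inertia/acceleration values
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(t * t for t in x)
best, val = pso(sphere, dim=3)
```

In the paper's three-stage scheme, the local optima reached by such swarm runs would then seed Trust-Tech and local optimizers rather than being reported directly.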

  11. The HYCOM (HYbrid Coordinate Ocean Model) Data Assimilative System

    DTIC Science & Technology

    2007-06-01

    Systems Inc., Stennis Space Center, MS, USA; SHOM/CMO, Toulouse, France; Los Alamos National Laboratory, Los Alamos, NM, USA. Received 1 October 2004... Global Ocean Data Assimilation Experiment (GODAE). GODAE is a coordinated inter-... U. of Miami, NRL, Los Alamos, NOAA/NCEP, NOAA/AOML, NOAA/PMEL, PSI... of Miami, the Naval Research Laboratory (NRL), and the Los Alamos... all three approaches and the optimal distribution is chosen at every time step. The

  12. Simulation-Based Methodologies for Global Optimization and Planning

    DTIC Science & Technology

    2013-10-11

    GESK). Stochastic kriging (SK) was introduced by Ankenman, Nelson, and Staum [1] to handle the stochastic simulation setting, where the noise in the... is used for the kriging. Four experiments will be used to illustrate some characteristics of SK, SKG, and GESK, with respect to the choice of... samples at each point. Because GESK is able to explore the design space more via extrapolation, it does a better job of capturing the fluctuations of the

  13. BRIC-17 Mapping Spaceflight-Induced Hypoxic Signaling and Response in Plants

    NASA Technical Reports Server (NTRS)

    Gilroy, Simon; Choi, Won-Gyu; Swanson, Sarah

    2012-01-01

    The goals of this work are to: (1) define global changes in gene expression patterns in Arabidopsis plants grown in microgravity using whole-genome microarrays, and (2) compare these with mutants resistant to low-oxygen challenge using whole-genome microarrays, while also measuring root and shoot size. The outcomes from this research are to: (1) provide fundamental information on plant responses to the stresses inherent in spaceflight, and (2) potentially inform genetic strategies to engineer plants for optimal growth in space.

  14. An Angular Method with Position Control for Block Mesh Squareness Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, J.; Stillman, D.

    We optimize a target function defined by angular properties with a position control term for a basic stencil with a block-structured mesh, to improve element squareness in 2D and 3D. Comparison with the condition number method shows that, besides achieving a similar mesh quality with regard to orthogonality, the new method converges faster and provides a more uniform global mesh spacing in our numerical tests.
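As an illustration of an angle-based squareness measure (an assumed form, not necessarily the authors' exact target function), one can penalize the deviation of a quadrilateral element's corner angles from 90 degrees:

```python
import math

def corner_angles(quad):
    # Interior angle (degrees) at each vertex of a quad given as 4 (x, y) points
    # listed in order around the element.
    angles = []
    for i in range(4):
        ax, ay = quad[i - 1]
        bx, by = quad[i]
        cx, cy = quad[(i + 1) % 4]
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        angles.append(math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))))
    return angles

def squareness_penalty(quad):
    # Sum of squared angle deviations from 90 degrees; zero for any rectangle.
    return sum((a - 90.0) ** 2 for a in corner_angles(quad))
```

Because any rectangle scores zero under a purely angular measure, such a term fixes orthogonality but not vertex placement, which is one motivation for adding a position control term as the abstract describes.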

  15. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
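The test problem's target, the Lennard-Jones pair potential, and the parallel-tempering swap criterion used by the search are both compact enough to sketch; the genetic-programming machinery itself is not reproduced here.

```python
import math
import random

def lj_pair(r, eps=1.0, sigma=1.0):
    # Lennard-Jones pair potential: 4*eps*((sigma/r)**12 - (sigma/r)**6).
    # Minimum value -eps at r = 2**(1/6) * sigma; zero crossing at r = sigma.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def swap_accept(e1, e2, beta1, beta2, rng):
    # Parallel-tempering replica swap between inverse temperatures beta1, beta2:
    # accept with probability min(1, exp((beta1 - beta2) * (e1 - e2))).
    return rng.random() < min(1.0, math.exp((beta1 - beta2) * (e1 - e2)))
```

Swaps between adjacent temperatures let high-temperature replicas carry the population of candidate functional forms out of local basins, which is how parallel tempering complements the Metropolis sampling described above.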

  16. GMI Instrument Spin Balance Method, Optimization, Calibration, and Test

    NASA Technical Reports Server (NTRS)

    Ayari, Laoucet; Kubitschek, Michael; Ashton, Gunnar; Johnston, Steve; Debevec, Dave; Newell, David; Pellicciotti, Joseph

    2014-01-01

    The Global Microwave Imager (GMI) instrument must spin at a constant rate of 32 rpm continuously for the 3 year mission life. Therefore, GMI must be very precisely balanced about the spin axis and CG to maintain stable scan pointing and to minimize disturbances imparted to the spacecraft and attitude control on-orbit. The GMI instrument is part of the core Global Precipitation Measurement (GPM) spacecraft and is used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and frequent precipitation measurements. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard Space Flight Center to design, build, and test the GMI instrument. The GMI design has to meet a challenging set of spin balance requirements and had to be brought into simultaneous static and dynamic spin balance after the entire instrument was already assembled and before environmental tests began. The focus of this contribution is on the analytical and test activities undertaken to meet the challenging spin balance requirements of the GMI instrument. The novel process of measuring the residual static and dynamic imbalances with a very high level of accuracy and precision is presented together with the prediction of the optimal balance masses and their locations.

  17. GMI Instrument Spin Balance Method, Optimization, Calibration and Test

    NASA Technical Reports Server (NTRS)

    Ayari, Laoucet; Kubitschek, Michael; Ashton, Gunnar; Johnston, Steve; Debevec, Dave; Newell, David; Pellicciotti, Joseph

    2014-01-01

    The Global Microwave Imager (GMI) instrument must spin at a constant rate of 32 rpm continuously for the 3-year mission life. Therefore, GMI must be very precisely balanced about the spin axis and center of gravity (CG) to maintain stable scan pointing and to minimize disturbances imparted to the spacecraft and attitude control on-orbit. The GMI instrument is part of the core Global Precipitation Measurement (GPM) spacecraft and is used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and frequent precipitation measurements. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard Space Flight Center to design, build, and test the GMI instrument. The GMI design has to meet a challenging set of spin balance requirements and had to be brought into simultaneous static and dynamic spin balance after the entire instrument was already assembled and before environmental tests began. The focus of this contribution is on the analytical and test activities undertaken to meet the challenging spin balance requirements of the GMI instrument. The novel process of measuring the residual static and dynamic imbalances with a very high level of accuracy and precision is presented together with the prediction of the optimal balance masses and their locations.

  18. Developing the "Lunar Vicinity" Scenario of the Global Exploration Roadmap

    NASA Astrophysics Data System (ADS)

    Schmidt, G.; Neal, C. R.; Crawford, I. A.; Ehrenfreund, P.

    2014-04-01

    The Global Exploration Roadmap (GER, [1]) has been developed by the International Space Exploration Coordination Group (ISECG - comprised of 14 space agencies) to define various pathways to getting humans beyond low Earth orbit and eventually to Mars. Such pathways include visiting asteroids or the Moon before going on to Mars. This document has been written at a very high level and many details are still to be determined. However, a number of important papers regarding international space exploration can form a basis for this document (e.g. [2,3]). In this presentation, we focus on developing the "Lunar Vicinity" scenario by adding detail via mapping a number of recent reports/documents into the GER. Precedence for this scenario is given by Szajnfarber et al. [4] who stated "We find that when international partners are considered endogenously, the argument for a "flexible path" approach is weakened substantially. This is because international contributions can make "Moon first" economically feasible". The documents highlighted here are in no way meant to be all encompassing and other documents can and should be added, (e.g., the JAXA Space Exploration Roadmap). This exercise is intended to demonstrate that existing documents can be mapped into the GER despite the major differences in granularity, and that this mapping is a way to promote broader national and international buy-in to the Lunar Vicinity scenario. The documents used here are: the Committee on Space Research (COSPAR) Panel on Exploration report on developing a global space exploration program [5], the Strategic Knowledge Gaps (SKGs) report from the Lunar Exploration Analysis Group (LEAG) [6], the Lunar Exploration Roadmap developed by LEAG [7], the National Research Council report Scientific Context for the Exploration of the Moon (SCEM) [8], the scientific rationale for resuming lunar surface exploration [9], the astrobiological benefits of human space exploration [9,10].

  19. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in highly degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. 
Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping-based EA, as well as an EA that detects interacting variables by monitoring changes in the objective function, in terms of system throughput performance.
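A minimal version of the correlation-grouping idea, assuming a symmetric pairwise interference matrix and a hypothetical threshold (the paper's actual grouping criterion may differ), can be sketched with a union-find pass:

```python
def correlation_groups(interference, threshold):
    # Greedily merge cells whose pairwise interference meets the threshold;
    # each resulting group is optimized as one EA subcomponent.
    n = len(interference)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if interference[i][j] >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Decomposing the placement problem this way keeps each EA subpopulation's search space small while still co-optimizing cells that interfere strongly with one another.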

  20. The ISECG Science White Paper - A Scientific Perspective on the Global Exploration Roadmap

    NASA Astrophysics Data System (ADS)

    Bussey, David B.; Worms, Jean-Claude; Spiero, Francois; Schlutz, Juergen; Ehrenfreund, Pascale

    2016-07-01

    Future space exploration goals call for sending humans and robots beyond low Earth orbit and establishing sustained access to destinations such as the Moon, asteroids and Mars. Space agencies participating in the International Space Exploration Coordination Group (ISECG) are discussing an international approach for achieving these goals, documented in ISECG's Global Exploration Roadmap (GER). The GER reference scenario reflects a step-wise evolution of critical capabilities from ISS to missions in the lunar vicinity in preparation for the journey of humans to Mars. As an element of this continued road mapping effort, the ISECG agencies are therefore soliciting input and coordinated discussion with the scientific community to better articulate and promote the scientific opportunities of the proposed mission themes. An improved understanding of the scientific drivers and the requirements to address priority science questions associated with the exploration destinations (Moon, Near Earth Asteroids, Mars and its moons) as well as the preparatory activities in cis-lunar space is beneficial to optimize the partnership of robotic assets and human presence beyond low Earth orbit. The interaction has resulted in the development of a Science White Paper to: • Identify and highlight the scientific opportunities in early exploration missions as the GER reference architecture matures, • Communicate overarching science themes and their relevance in the GER destinations, • Ensure international science communities' perspectives inform the future evolution of mission concepts considered in the GER The paper aims to capture the opportunities offered by the missions in the GER for a broad range of scientific disciplines. These include planetary and space sciences, astrobiology, life sciences, physical sciences, astronomy and Earth science. 
The paper is structured around grand science themes that draw together and connect research in the various disciplines, and it focuses on opportunities created by the near-term mission themes in the GER, centred around (1) extended-duration crew missions to an exploration habitat in cis-lunar space, (2) crew mission(s) to an asteroid, and (3) crew missions to the lunar surface. The preparation of the Science White Paper has been coordinated and led by an external Science Advisory Group composed of scientists from a variety of nations. The first draft of this White Paper was discussed at a COSPAR-ISECG-ESF workshop organised in Paris on 10-11 February 2016. The recommendations developed at the workshop provide further input that is incorporated in the final version of the ISECG Science White Paper, expected to be published in the fall of 2016. The authors aim to present the rationale and contents of this White Paper at the COSPAR Assembly.

  1. UCLA IGPP Space Plasma Simulation Group

    NASA Technical Reports Server (NTRS)

    1998-01-01

    During the past 10 years the UCLA IGPP Space Plasma Simulation Group has pursued its theoretical effort to develop a Mission Oriented Theory (MOT) for the International Solar Terrestrial Physics (ISTP) program. This effort has been based on a combination of approaches: analytical theory, large scale kinetic (LSK) calculations, global magnetohydrodynamic (MHD) simulations and self-consistent plasma kinetic (SCK) simulations. These models have been used to formulate a global interpretation of local measurements made by the ISTP spacecraft. The regions of applications of the MOT cover most of the magnetosphere: the solar wind, the low- and high-latitude magnetospheric boundary, the near-Earth and distant magnetotail, and the auroral region. Most recent investigations include: plasma processes in the electron foreshock, response of the magnetospheric cusp, particle entry in the magnetosphere, sources of observed distribution functions in the magnetotail, transport of oxygen ions, self-consistent evolution of the magnetotail, substorm studies, effects of explosive reconnection, and auroral acceleration simulations.

  2. 'Lending the space': midwives' perceptions of birth space and clinical risk management.

    PubMed

    Seibold, Carmel; Licqurish, Sharon; Rolls, Colleen; Hopkins, Finbar

    2010-10-01

    To explore and describe midwives' perceptions of birth space and clinical risk management and their impact on practice both before and after a move to a new facility. An exploratory descriptive study utilising a modified participatory approach, with observation and focus groups for data collection. A major metropolitan maternity hospital in Victoria, Australia. 18 midwives, including graduate-year midwives, caseload midwives and hospital midwives working normal shifts, employed within a hospital. The major themes identified were: perceptions of birth space; perceptions of risk management; influence of birth space and risk management on practice; and moving but not changing: geographical space and practice. Midwives' desire to create the ideal birth space was hampered by a prevailing biomedical discourse which emphasised risk. Midwives in all three groups saw themselves as the gatekeepers, 'holding the space' or 'providing a bridge' for women, often in the face of a hierarchical hospital structure with obstetricians governing practice. This situation did not differ significantly after the relocation to the new hospital. Despite a warmer, more spacious and private birth space, midwives felt the care was still influenced by the old hierarchical hospital culture. Caseload midwives felt they had the best opportunity to make a difference to women's experience because, through continuity of care, they were able to build trusting relationships with women during the antenatal period. Although the physical environment can make a marginal contribution to an optimal birth space, it has little effect on clinical risk management practices within a major public hospital and the way in which this impacts midwives' practice. The importance of place and people is the key to providing an optimal birth space, as are women-centred midwifery models of care and reasonable workloads. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    Joint design of minimum duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  4. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. 
Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
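The division of labor described above, a classical heuristic exploring the full space and the annealer refining within a reduced neighborhood, can be caricatured entirely classically. The sketch below substitutes plain random sampling for the genetic algorithm and a shrinking local search for the QA device; both stand-ins are assumptions for illustration.

```python
import random

def coarse_then_fine(f, dim, lo, hi, seed=0):
    rng = random.Random(seed)
    # Stage 1: broad sampling of the full box stands in for the classical GA,
    # which narrows the search to a promising region.
    coarse = min(([rng.uniform(lo, hi) for _ in range(dim)] for _ in range(500)),
                 key=f)
    # Stage 2: shrinking Gaussian perturbations stand in for the annealer
    # refining within the reduced neighborhood.
    x, fx, step = coarse, f(coarse), (hi - lo) * 0.1
    for _ in range(200):
        cand = [c + rng.gauss(0.0, step) for c in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        step *= 0.98
    return x, fx
```

The structural point is that stage 2 never needs to represent the whole box, mirroring how a hardware annealer with limited qubits and connectivity can still finish a search the classical stage has already localized.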

  5. A characterization of the coupled evolution of grain fabric and pore space using complex networks: Pore connectivity and optimized flows in the presence of shear bands

    NASA Astrophysics Data System (ADS)

    Russell, Scott; Walker, David M.; Tordesillas, Antoinette

    2016-03-01

    A framework for the multiscale characterization of the coupled evolution of the solid grain fabric and its associated pore space in dense granular media is developed. In this framework, a pseudo-dual graph transformation of the grain contact network produces a graph of pores which can be readily interpreted as a pore space network. Survivability, a new metric succinctly summarizing the connectivity of the solid grain and pore space networks, measures material robustness. The size distribution and the connectivity of pores can be characterized quantitatively through various network properties. Assortativity characterizes the pore space with respect to the parity of the number of particles enclosing the pore. Multiscale clusters of odd parity versus even parity contact cycles alternate spatially along the shear band: these represent, respectively, local jamming and unjamming regions that continually switch positions in time throughout the failure regime. Optimal paths, established using network shortest paths in favor of large pores, provide clues on preferential paths for interstitial matter transport. In systems with higher rolling resistance at contacts, less tortuous shortest paths thread through larger pores in shear bands. Notably the structural patterns uncovered in the pore space suggest that more robust models of interstitial pore flow through deforming granular systems require a proper consideration of the evolution of in situ shear band and fracture patterns - not just globally, but also inside these localized failure zones.
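Optimal paths "in favor of large pores" can be modeled, for illustration, as network shortest paths in which entering a pore costs the reciprocal of its size; the cost function is an assumption, not necessarily the one used by the authors.

```python
import heapq

def large_pore_path(adj, size, src, dst):
    # Dijkstra on a pore network: traversing into pore v costs 1/size[v], so
    # cheapest paths thread preferentially through large pores.
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in adj[u]:
            nd = d + 1.0 / size[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

On a graph derived from the pseudo-dual transformation, such paths give a first estimate of preferential routes for interstitial transport through a shear band.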

  6. The space shuttle payload planning working groups. Volume 8: Earth and ocean physics

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The findings and recommendations of the Earth and Ocean Physics working group of the space shuttle payload planning activity are presented. The requirements for the space shuttle mission are defined as: (1) precision measurement for earth and ocean physics experiments, (2) development and demonstration of new and improved sensors and analytical techniques, (3) acquisition of surface truth data for evaluation of new measurement techniques, (4) conduct of critical experiments to validate geophysical phenomena and instrumental results, and (5) development and validation of analytical/experimental models for global ocean dynamics and solid earth dynamics/earthquake prediction. Tables of data are presented to show the flight schedule estimated costs, and the mission model.

  7. Design of high-strength refractory complex solid-solution alloys

    DOE PAGES

    Singh, Prashant; Sharma, Aayush; Smirnov, A. V.; ...

    2018-03-28

    Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for higher temperature strength and corrosion resistance. Yet, complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over five-dimensional design space with improved mechanical properties and necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamic simulations, validated from our first-principle data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K over commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.

  8. Design of high-strength refractory complex solid-solution alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Prashant; Sharma, Aayush; Smirnov, A. V.

    Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for high-temperature strength and corrosion resistance. Yet complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic-structure methods via the coherent-potential approximation to identify alloys over a five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations validated against our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K than commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.

  9. Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank

    2014-12-07

    We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced-sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum-likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.

  10. Application of artificial intelligence to search ground-state geometry of clusters

    NASA Astrophysics Data System (ADS)

    Lemes, Maurício Ruv; Marim, L. R.; dal Pino, A.

    2002-08-01

    We introduce a global optimization procedure, the neural-assisted genetic algorithm (NAGA). It combines the power of an artificial neural network (ANN) with the versatility of the genetic algorithm. This method is suitable to solve optimization problems that depend on some kind of heuristics to limit the search space. If a reasonable amount of data is available, the ANN can ``understand'' the problem and provide the genetic algorithm with a selected population of elements that will speed up the search for the optimum solution. We tested the method in a search for the ground-state geometry of silicon clusters. We trained the ANN with information about the geometry and energetics of small silicon clusters. Next, the ANN learned how to restrict the configurational space for larger silicon clusters. For Si10 and Si20, we noticed that the NAGA is at least three times faster than the ``pure'' genetic algorithm. As the size of the cluster increases, it is expected that the gain in terms of time will increase as well.
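
    As a rough illustration of the NAGA idea, the sketch below pairs a plain genetic algorithm with a cheap surrogate screen that seeds the initial population. Everything here is a stand-in: a toy Lennard-Jones energy replaces the quantum-chemical energetics, and a hand-written distance heuristic replaces the trained ANN described in the abstract.

```python
import math
import random

def lj_energy(coords):
    """Toy Lennard-Jones cluster energy; a stand-in for quantum-chemical evaluation."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = max(math.dist(coords[i], coords[j]), 1e-6)
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return e

def surrogate_score(coords):
    """Stand-in for the trained ANN: penalize pair distances far from ~1.12,
    the equilibrium separation of the toy potential (lower is better)."""
    return sum(abs(math.dist(a, b) - 1.12)
               for i, a in enumerate(coords) for b in coords[i + 1:])

def random_cluster(n, box=2.0):
    return [tuple(random.uniform(-box, box) for _ in range(3)) for _ in range(n)]

def naga(n_atoms=5, pool=200, pop=20, gens=40, seed=0):
    random.seed(seed)
    # surrogate-assisted seeding: draw a large random pool, keep the best few
    candidates = [random_cluster(n_atoms) for _ in range(pool)]
    population = sorted(candidates, key=surrogate_score)[:pop]
    for _ in range(gens):
        parents = sorted(population, key=lj_energy)[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_atoms)        # one-point crossover
            child = [list(p) for p in a[:cut] + b[cut:]]
            k = random.randrange(n_atoms)             # Gaussian mutation of one atom
            child[k] = [c + random.gauss(0.0, 0.1) for c in child[k]]
            children.append([tuple(p) for p in child])
        population = parents + children
    return min(population, key=lj_energy)

best = naga()
```

    The surrogate only filters the starting population; the expensive objective still drives selection, which is the division of labor the abstract describes.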

  11. Report of the solar physics panel

    NASA Technical Reports Server (NTRS)

    Withbroe, George L.; Fisher, Richard R.; Antiochos, Spiro; Brueckner, Guenter; Hoeksema, J. Todd; Hudson, Hugh; Moore, Ronald; Radick, Richard R.; Rottman, Gary; Scherrer, Philip

    1991-01-01

    Recent accomplishments in solar physics can be grouped by the three regions of the Sun: the solar interior, the surface, and the exterior. Future scientific problems and areas of interest involve generation of the magnetic activity cycle, energy storage and release, solar activity, the solar wind, and solar interactions. Finally, the report discusses a number of future space mission concepts, including the High Energy Solar Physics Mission, Global Solar Mission, Space Exploration Initiative, Solar Probe Mission, Solar Variability Explorer, and Janus, as well as solar physics on Space Station Freedom.

  12. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and, in turn, allowing larger mapping data sets to be analyzed using global rather than local optimization criteria. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" both in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
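
    A minimal sketch of the Evolution Strategy ingredient: a (mu + lambda) ES over marker permutations whose mutation operator reverses a segment, so "potentially good orders" inside a parent survive into trial solutions. The marker positions and the map-length fitness below are hypothetical toy data, not the paper's recombination-based criterion.

```python
import random

def map_length(order, pos):
    """Total map length implied by an ordering: sum of adjacent distances."""
    return sum(abs(pos[a] - pos[b]) for a, b in zip(order, order[1:]))

def reversal_mutation(order):
    """Reverse a random segment, preserving good sub-orders within it."""
    i, j = sorted(random.sample(range(len(order)), 2))
    return order[:i] + order[i:j + 1][::-1] + order[j + 1:]

def es_order(pos, mu=5, lam=20, gens=300, seed=1):
    """(mu + lambda) evolution strategy over marker permutations."""
    random.seed(seed)
    n = len(pos)
    parents = [random.sample(range(n), n) for _ in range(mu)]
    for _ in range(gens):
        offspring = [reversal_mutation(random.choice(parents)) for _ in range(lam)]
        parents = sorted(parents + offspring,
                         key=lambda o: map_length(o, pos))[:mu]
    return parents[0]

# hypothetical marker positions (in centimorgans) on one linkage group
positions = [0.0, 3.1, 5.0, 9.2, 12.5, 14.0, 20.3, 22.1]
best = es_order(positions)
```

    On this toy instance the shortest map is the sorted (or reverse-sorted) marker order, which the ES recovers quickly; the real algorithm scales this scheme to thousands of markers with a global criterion.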

  13. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained by the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifier, outperform the random approach in terms of the generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
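
    The PSO part of this scheme can be sketched compactly. Here a particle is a configuration vector (log10 learning rate, hidden-layer width), and `val_loss` is a made-up smooth surrogate for "train the DNN briefly and return validation loss"; the assumed optimum at lr = 0.1 and 64 units is illustrative only.

```python
import random

def val_loss(config):
    """Stand-in for a short training run: quadratic bowl with a
    hypothetical optimum at log10(lr) = -1 and 64 hidden units."""
    log_lr, hidden = config
    return (log_lr + 1.0) ** 2 + ((hidden - 64.0) / 64.0) ** 2

def pso(objective, bounds, n_particles=20, iters=100, seed=0):
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_f = [objective(p) for p in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# bounds: log10(lr) in [-4, 0], hidden units in [8, 256]
best_cfg, best_f = pso(val_loss, bounds=[(-4.0, 0.0), (8.0, 256.0)])
```

    In the paper's setup, each `objective` call would be a few epochs of gradient descent on the DNN, and the final pbest/gbest configurations would be retrained at length.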

  14. Meeting Report: Long Term Monitoring of Global Vegetation using Moderate Resolution Satellites

    NASA Technical Reports Server (NTRS)

    Morisette, Jeffrey; Heinsch, Faith Ann; Running, Steven W.

    2006-01-01

    The international community has long recognized the need to coordinate observations of Earth from space. In 1984, this situation provided the impetus for creating the Committee on Earth Observation Satellites (CEOS), an international coordinating mechanism charged with coordinating international civil spaceborne missions designed to observe and study planet Earth. Within CEOS, its Working Group on Calibration and Validation (WGCV) is tasked with coordinating satellite-based global observations of vegetation. Currently, several international organizations are focusing on the requirements for Earth observation from space to address key science questions and societal benefits related to our terrestrial environment. The Global Vegetation Workshop, sponsored by the WGCV and held in Missoula, Montana, 7-10 August, 2006, was organized to establish a framework to understand the inter-relationships among multiple, global vegetation products and identify opportunities for: 1) Increasing knowledge through combined products, 2) Realizing efficiency by avoiding redundancy, and 3) Developing near- and long-term plans to avoid gaps in our understanding of critical global vegetation information. The Global Vegetation Workshop brought together 135 researchers from 25 states and 14 countries to advance these themes and formulate recommendations for CEOS members and the Global Earth Observation System of Systems (GEOSS). The eighteen oral presentations and most of the 74 posters presented at the meeting can be downloaded from the meeting website (www.ntsg.umt.edu/VEGMTG/). Meeting attendees were given a copy of the July 2006 IEEE Transactions on Geoscience and Remote Sensing Special Issue on Global Land Product Validation, coordinated by the CEOS Working Group on Calibration and Validation (WGCV). This issue contains 29 articles focusing on validation products from several of the sensors discussed during the workshop.

  15. Image Reconstruction from Highly Undersampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei

    2012-01-01

    Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345

  16. The Potential of Spaceborne Remote Sensing for Deriving Canopy Structure Metrics and Informing Biodiversity and Habitat Mapping

    NASA Astrophysics Data System (ADS)

    Goetz, S. J.; Dubayah, R.

    2016-12-01

    Research on characterization of canopy structure with remote sensing has exploded as airborne data sets have become more widely available to the biodiversity science and habitat management communities. While these advances are important in the context of increasing pressure on both habitat and wildlife, airborne data acquisitions are necessarily limited in geographic scope and thus in their general applicability to biome-scale biodiversity research initiatives, including international programs striving to implement the United Nations Convention on Biological Diversity (CBD) and the associated Aichi Biodiversity Targets. The lack of systematic metrics of canopy structure across large geographic domains also makes it difficult to implement the CBD Strategic Plan systematically across nations, as outlined in National Biodiversity Strategies and Action Plans. The Group on Earth Observations, Biodiversity Observation Network (GEO BON) has proposed a set of Essential Biodiversity Variables (EBVs) that could be used as a global-scale basis for biodiversity monitoring, but several of those EBVs are still limited by the availability of data on habitat 3D structure. Those limitations will be overcome in the near future with a suite of satellite missions that will provide an unprecedented level of active remote sensing measurements useful for deriving structure information, including Tandem-X, ICESat-2, BIOMASS and the Global Ecosystem Dynamics Investigation (GEDI). We will provide a brief overview of the rapid advance of measurements of canopy structure and the applications that have evolved in recent years in terms of 3D habitat characterization, species-specific habitat utilization, and the potential of these new space-based measurements. In this talk we will focus primarily on GEDI, a lidar mission to be installed on the International Space Station that is optimized for retrieving 3D canopy structure. 
GEDI and the other new missions will provide long-desired consistent and systematic information on EBVs from space, and thereby facilitate the implementation of international biodiversity policy objectives.

  17. Enabling Future Science and Human Exploration with NASA's Next Generation Near Earth and Deep Space Communications and Navigation Architecture

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard; Schier, James; Israel, David; Tai, Wallace; Liebrecht, Philip; Townes, Stephen

    2017-01-01

    The National Aeronautics and Space Administration (NASA) is studying alternatives for the United States space communications architecture through the 2040 timeframe. This architecture provides communication and navigation services to both human exploration and science missions throughout the solar system. Several of NASA's key space assets are approaching their end of design life and major systems are in need of replacement. The changes envisioned in the relay satellite architecture and capabilities around both Earth and Mars are significant undertakings and occur only once or twice each generation; this effort is therefore referred to as NASA's next generation space communications architecture. NASA's next generation architecture will benefit from technology and services developed over recent years. These innovations will provide missions with new operations concepts, increased performance, and new business and operating models. Advancements in optical communications will enable high-speed data channels and the use of new and more complex science instruments. Modern multiple beam/multiple access technologies such as those employed on commercial high throughput satellites will enable enhanced capabilities for on-demand service, and with new protocols will help provide Internet-like connectivity for cooperative spacecraft to improve data return and coordinate joint mission objectives. On-board processing with autonomous and cognitive networking will play larger roles to help manage system complexity. Spacecraft and ground systems will coordinate among themselves to establish communications, negotiate link connectivity, and learn to share spectrum to optimize resource allocation. Spacecraft will autonomously navigate, plan trajectories, and handle off-nominal events. NASA intends to leverage the ever-expanding capabilities of the satellite communications industry and foster its continued growth.
NASA's technology development will complement and extend commercial capabilities to meet unique space environment requirements and to provide capabilities that are beyond the commercial marketplace. The progress of the communications industry, including the emerging global space internet segment with its planned constellations of hundreds of satellites, offers additional opportunities for new capabilities and mission concepts. The opportunities and challenges of a future space architecture require an optimal solution encompassing a global perspective. The concepts and technologies intentionally define an architecture that applies not only to NASA, but to other U.S. government agencies, international space and government agencies, and domestic and international industries to advance the openness, interoperability, and affordability of space communications. Cooperation among the world's space agencies, spanning their capabilities, standards, operations, and interoperability, is key to advancing humankind's understanding of the universe and extending human presence into the solar system.

  18. Enabling Future Science and Human Exploration with NASA's Next Generation near Earth and Deep Space Communications and Navigation Architecture

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Schier, James S.; Israel, David J.; Tai, Wallace; Liebrecht, Philip E.; Townes, Stephen A.

    2017-01-01

    The National Aeronautics and Space Administration (NASA) is studying alternatives for the United States space communications architecture through the 2040 timeframe. This architecture provides communication and navigation services to both human exploration and science missions throughout the solar system. Several of NASA's key space assets are approaching their end of design life and major systems are in need of replacement. The changes envisioned in the relay satellite architecture and capabilities around both Earth and Mars are significant undertakings and occur only once or twice each generation; this effort is therefore referred to as NASA's next generation space communications architecture. NASA's next generation architecture will benefit from technology and services developed over recent years. These innovations will provide missions with new operations concepts, increased performance, and new business and operating models. Advancements in optical communications will enable high-speed data channels and the use of new and more complex science instruments. Modern multiple beam/multiple access technologies such as those employed on commercial high throughput satellites will enable enhanced capabilities for on-demand service, and with new protocols will help provide Internet-like connectivity for cooperative spacecraft to improve data return and coordinate joint mission objectives. On-board processing with autonomous and cognitive networking will play larger roles to help manage system complexity. Spacecraft and ground systems will coordinate among themselves to establish communications, negotiate link connectivity, and learn to share spectrum to optimize resource allocation. Spacecraft will autonomously navigate, plan trajectories, and handle off-nominal events. NASA intends to leverage the ever-expanding capabilities of the satellite communications industry and foster its continued growth.
NASA's technology development will complement and extend commercial capabilities to meet unique space environment requirements and to provide capabilities that are beyond the commercial marketplace. The progress of the communications industry, including the emerging global space internet segment with its planned constellations of hundreds of satellites, offers additional opportunities for new capabilities and mission concepts. The opportunities and challenges of a future space architecture require an optimal solution encompassing a global perspective. The concepts and technologies intentionally define an architecture that applies not only to NASA, but to other U.S. government agencies, international space and government agencies, and domestic and international industries to advance the openness, interoperability, and affordability of space communications. Cooperation among the world's space agencies, spanning their capabilities, standards, operations, and interoperability, is key to advancing humankind's understanding of the universe and extending human presence into the solar system.

  19. Expedite random structure searching using objects from Wyckoff positions

    NASA Astrophysics Data System (ADS)

    Wang, Shu-Wei; Hsing, Cheng-Rong; Wei, Ching-Ming

    2018-02-01

    Random structure searching has proved to be a powerful approach for finding the global minimum and the metastable structures. True random sampling is in principle needed, yet it would be highly time-consuming and/or practically impossible to find the global minimum for complicated systems in their high-dimensional configuration space. Thus the implementation of reasonable constraints, such as adopting system symmetries to reduce the independent dimensions of structural space and/or imposing chemical information to reach and relax into low-energy regions, is the most essential issue in the approach. In this paper, we propose the concept of an "object", which is either an atom or a set of atoms (such as molecules or carbonates) carrying a symmetry defined by one of the Wyckoff positions of a space group; this allows the search for the global minimum of a complicated system to be confined to a greatly reduced structural space and to become accessible in practice. We examined several representative materials, including the Cd3As2 crystal, solid methanol, high-pressure carbonates (FeCO3), and the Si(111)-7 × 7 reconstructed surface, to demonstrate the power and the advantages of using the "object" concept in random structure searching.
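
    The dimension-reduction idea can be illustrated with a deliberately small sketch: random cluster candidates constrained to Cs mirror symmetry, where each "object" (a single atom here) occupies either a special position on the mirror plane or a general position generated together with its mirror image. The toy Lennard-Jones energy is a stand-in for the first-principles relaxations used in the paper.

```python
import math
import random

def lj_energy(coords):
    """Toy Lennard-Jones energy; a stand-in for first-principles evaluation."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = max(math.dist(coords[i], coords[j]), 1e-6)
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return e

def random_cs_cluster(n_pairs, n_on_plane, box=2.0):
    """Build a candidate with Cs symmetry: atoms either sit on the z=0
    mirror plane (a special, Wyckoff-like position) or come in
    mirror-image pairs (the general position)."""
    atoms = []
    for _ in range(n_pairs):
        x, y = random.uniform(-box, box), random.uniform(-box, box)
        z = random.uniform(0.1, box)       # keep the pair off the plane
        atoms += [(x, y, z), (x, y, -z)]   # atom plus its mirror image
    for _ in range(n_on_plane):
        atoms.append((random.uniform(-box, box), random.uniform(-box, box), 0.0))
    return atoms

def search(n_trials=500, seed=0):
    random.seed(seed)
    # only 3*n_pairs + 2*n_on_plane free parameters instead of 3*n_atoms
    candidates = [random_cs_cluster(n_pairs=2, n_on_plane=1)
                  for _ in range(n_trials)]
    return min(candidates, key=lj_energy)

best = search()
```

    Every candidate is symmetric by construction, so the sampler explores a lower-dimensional slice of configuration space, which is the essence of the Wyckoff-position constraint.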

  20. Global Professional Identity in Deterritorialized Spaces: A Case Study of a Critical Dialogue between Expert and Novice Nonnative English Speaker Teachers (Identidad profesional global en espacios desterritorializados: un estudio de caso de los diálogos críticos entre profesores de inglés no nativos)

    ERIC Educational Resources Information Center

    Guerrero Nieto, Carmen Helena; Meadows, Bryan

    2015-01-01

    This study analyzes the online, peer-peer dialogue between two groups of nonnative English-speaking teachers who are attending graduate programs in Colombia and the United States. Framed by the theoretical concepts of critical pedagogy and global professional identity, a qualitative analysis of the data shows that their expert vs. novice roles…

  1. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing a model of the evolution operator. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E 80, 046207 (2009). S. Kravtsov, D. Kondrashov and M. Ghil, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate 18 (21): 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate 18, 4425-4444.

  2. Global optimization of multicomponent distillation configurations: 2. Enumeration based global minimization algorithm

    DOE PAGES

    Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; ...

    2016-02-10

    We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints as bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already-developed techniques that enumerate basic and thermally coupled distillation configurations to provide, for the first time, a global-optimization-based rank list of distillation configurations.

  3. Suboptimal evolutionary novel environments promote singular altered gravity responses of transcriptome during Drosophila metamorphosis

    PubMed Central

    2013-01-01

    Background: Previous experiments have shown that the reduced gravity aboard the International Space Station (ISS) causes important alterations in Drosophila gene expression. These changes were shown to be intimately linked to environmental space-flight-related constraints. Results: Here, we use an array of different techniques for ground-based simulation of microgravity effects to assess the effect of suboptimal environmental conditions on the gene expression of Drosophila in reduced gravity. A global and integrative analysis, using “gene expression dynamics inspector” (GEDI) self-organizing maps, reveals different degrees of response in the transcriptome when using different environmental conditions or microgravity/hypergravity simulation devices. Although the genes that are affected differ in each simulation technique, we find that the same gene ontology groups, including at least one large multigene family related to behavior, stress response or organogenesis, are overrepresented in each case. Conclusions: These results suggest that the transcriptome as a whole can be finely tuned to the gravity force. Under optimum environmental conditions, the alteration of gravity has only mild effects on gene expression, but when environmental conditions are far from optimal, gene expression must be tuned greatly and the effects become more robust, probably linked to the lack of experience of organisms exposed to evolutionarily novel environments such as a gravity-free one. PMID:23806134

  4. Trade-offs between lens complexity and real estate utilization in a free-space multichip global interconnection module.

    PubMed

    Milojkovic, Predrag; Christensen, Marc P; Haney, Michael W

    2006-07-01

    The FAST-Net (Free-space Accelerator for Switching Terabit Networks) concept uses an array of wide-field-of-view imaging lenses to realize a high-density shuffle interconnect pattern across an array of smart-pixel integrated circuits. To simplify the optics we evaluated the efficiency gained in replacing spherical surfaces with aspherical surfaces by exploiting the large disparity between narrow vertical cavity surface emitting laser (VCSEL) beams and the wide field of view of the imaging optics. We then analyzed trade-offs between lens complexity and chip real estate utilization and determined that there exists an optimal numerical aperture for VCSELs that maximizes their area density. The results provide a general framework for the design of wide-field-of-view free-space interconnection systems that incorporate high-density VCSEL arrays.

  5. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in the number of function evaluations required. Thus the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as the searched variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
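
    The coefficient-space idea can be sketched as follows: the searched variables are the amplitudes of a truncated cosine series describing a bar's thickness profile, and simulated annealing minimizes the error between computed and target modal-frequency ratios. The target ratios and the Rayleigh-quotient-style frequency estimate below are crude stand-ins for a proper modal eigensolver, chosen only so the sketch is self-contained.

```python
import math
import random

TARGET_RATIOS = [2.756, 5.404]   # hypothetical desired f2/f1, f3/f1

def profile(coeffs, x):
    """Bar thickness as 1 + truncated cosine series (the searched shape)."""
    h = 1.0 + sum(c * math.cos((k + 1) * math.pi * x) for k, c in enumerate(coeffs))
    return max(h, 0.05)          # keep the thickness positive

def freq_ratios(coeffs, n_grid=64):
    """Toy Rayleigh-quotient surrogate for the first three modal frequencies."""
    xs = [(i + 0.5) / n_grid for i in range(n_grid)]
    hs = [profile(coeffs, x) for x in xs]
    freqs = []
    for m in (1, 2, 3):
        w = [math.sin(m * math.pi * x) ** 2 for x in xs]
        stiff = sum(h ** 3 * wi for h, wi in zip(hs, w))
        mass = sum(h * wi for h, wi in zip(hs, w))
        freqs.append(m ** 2 * math.sqrt(stiff / mass))
    return [freqs[1] / freqs[0], freqs[2] / freqs[0]]

def modal_error(coeffs):
    return sum((r - t) ** 2 for r, t in zip(freq_ratios(coeffs), TARGET_RATIOS))

def anneal(n_terms=4, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over the series coefficients, not local geometry."""
    random.seed(seed)
    coeffs = [0.0] * n_terms                   # start from a uniform bar
    best, e = coeffs[:], modal_error(coeffs)
    e_best = e
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-3   # linear cooling schedule
        cand = coeffs[:]
        cand[random.randrange(n_terms)] += random.gauss(0.0, 0.1)
        e_cand = modal_error(cand)
        if e_cand < e or random.random() < math.exp((e - e_cand) / t):
            coeffs, e = cand, e_cand
            if e < e_best:
                best, e_best = coeffs[:], e
    return best, e_best

best_coeffs, best_err = anneal()
```

    With only four coefficients the annealer searches a 4-dimensional space regardless of how finely the shape itself is discretized, which is the computational saving the abstract argues for.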

  6. Ground-to-satellite quantum teleportation.

    PubMed

    Ren, Ji-Gang; Xu, Ping; Yong, Hai-Lin; Zhang, Liang; Liao, Sheng-Kai; Yin, Juan; Liu, Wei-Yue; Cai, Wen-Qi; Yang, Meng; Li, Li; Yang, Kui-Xing; Han, Xuan; Yao, Yong-Qiang; Li, Ji; Wu, Hai-Yan; Wan, Song; Liu, Lei; Liu, Ding-Quan; Kuang, Yao-Wu; He, Zhi-Ping; Shang, Peng; Guo, Cheng; Zheng, Ru-Hua; Tian, Kai; Zhu, Zhen-Cai; Liu, Nai-Le; Lu, Chao-Yang; Shu, Rong; Chen, Yu-Ao; Peng, Cheng-Zhi; Wang, Jian-Yu; Pan, Jian-Wei

    2017-09-07

    An arbitrary unknown quantum state cannot be measured precisely or replicated perfectly. However, quantum teleportation enables unknown quantum states to be transferred reliably from one object to another over long distances, without physical travelling of the object itself. Long-distance teleportation is a fundamental element of protocols such as large-scale quantum networks and distributed quantum computation. But the distances over which transmission was achieved in previous teleportation experiments, which used optical fibres and terrestrial free-space channels, were limited to about 100 kilometres, owing to the photon loss of these channels. To realize a global-scale 'quantum internet' the range of quantum teleportation needs to be greatly extended. A promising way of doing so involves using satellite platforms and space-based links, which can connect two remote points on Earth with greatly reduced channel loss because most of the propagation path of the photons is in empty space. Here we report quantum teleportation of independent single-photon qubits from a ground observatory to a low-Earth-orbit satellite, through an uplink channel, over distances of up to 1,400 kilometres. To optimize the efficiency of the link and to counter the atmospheric turbulence in the uplink, we use a compact ultra-bright source of entangled photons, a narrow beam divergence and high-bandwidth and high-accuracy acquiring, pointing and tracking. We demonstrate successful quantum teleportation of six input states in mutually unbiased bases with an average fidelity of 0.80 ± 0.01, well above the optimal state-estimation fidelity on a single copy of a qubit (the classical limit). Our demonstration of a ground-to-satellite uplink for reliable and ultra-long-distance quantum teleportation is an essential step towards a global-scale quantum internet.

  8. Development and Application of Optimization Techniques for Composite Laminates.

    DTIC Science & Technology

    1983-09-01

    Institute of Technology Air University in Partial Fulfillment of the Requirements for the Degree of Master of Science by Gerald V. Flanagan, S.B. Lt. USAF...global minima [9]. An informal definition of convexity is that any two points in the space can be connected by a straight line which does not pass out of...question. A quick look at gradient information suggests that too few angles (2 for example) will make the laminate sensitive to small changes in

  9. Self-Organizing Hierarchical Particle Swarm Optimization with Time-Varying Acceleration Coefficients for Economic Dispatch with Valve Point Effects and Multifuel Options

    NASA Astrophysics Data System (ADS)

    Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc

    2011-06-01

    This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth cost functions, including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimization approach that performs well on ED problems. It counters premature convergence by re-initializing particle velocities whenever particles stagnate in the search space, while TVAC properly balances local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC finds higher-quality solutions for non-smooth ED problems than many other methods.
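    The mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function name `spso_tvac`, the TVAC ranges (cognitive coefficient decaying 2.5 → 0.5, social growing 0.5 → 2.5), and the stagnation threshold are all illustrative assumptions.

```python
import random

def spso_tvac(f, bounds, n_particles=30, iters=200):
    """Minimize f over box bounds with a self-organizing hierarchical PSO
    using time-varying acceleration coefficients (TVAC)."""
    dim = len(bounds)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    vmax = [(h - l) * 0.2 for l, h in zip(lo, hi)]
    X = [[random.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for t in range(iters):
        # TVAC: cognitive c1 decays 2.5 -> 0.5, social c2 grows 0.5 -> 2.5
        c1 = 2.5 - 2.0 * t / iters
        c2 = 0.5 + 2.0 * t / iters
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                # self-organizing rule: re-initialize a stagnated velocity
                if abs(V[i][d]) < 1e-12:
                    V[i][d] = random.uniform(-vmax[d], vmax[d])
                V[i][d] = max(-vmax[d], min(vmax[d], V[i][d]))
                X[i][d] = max(lo[d], min(hi[d], X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

    Note the absence of an inertia term: as in hierarchical PSO variants, each velocity is rebuilt from the personal-best and global-best attractors alone, and the re-initialization rule supplies the exploration that inertia would otherwise provide.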

  10. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.

  11. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved, while small-scale features, such as object details and noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm provides more accurate segmentation results than the classic contour fitting process while remaining very robust to noise and initialization.
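    The temperature-versus-scale schedule can be sketched generically. The following Python toy is an assumption-laden illustration: the names `anneal` and `coarse_to_fine`, the linear cooling schedule, and a one-dimensional energy function all stand in for the paper's deformable-model energy and scale-space construction.

```python
import math
import random

def anneal(energy, x0, T0, steps=500, step_size=0.5):
    """Simulated annealing on a 1-D energy. A high starting temperature T0
    lets the search cross energy barriers; T0 near 0 reduces it to an
    almost deterministic, purely local descent."""
    x, e = x0, energy(x0)
    for k in range(steps):
        T = T0 * (1.0 - k / steps) + 1e-9          # linear cooling
        cand = x + random.uniform(-step_size, step_size)
        de = energy(cand) - e
        if de < 0 or random.random() < math.exp(-de / T):
            x, e = cand, e + de                    # accept the move
    return x

def coarse_to_fine(energies, x0, T_hi=2.0):
    """Track a solution through scale levels (coarse -> fine), restarting
    SA at each level with a starting temperature that shrinks to ~0 at
    the finest scale, mirroring the multi-temperature scheme above."""
    x = x0
    n = len(energies)
    for level, E in enumerate(energies):
        T0 = T_hi * (n - 1 - level) / max(n - 1, 1)
        x = anneal(E, x, T0)
    return x
```

    At the coarsest level the search can escape a poor initialization; by the finest level the near-zero temperature confines it to refining the solution already found, exactly the coarse-to-fine behavior the abstract describes.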

  12. Protective ventilation in experimental acute respiratory distress syndrome after ventilator-induced lung injury: a randomized controlled trial.

    PubMed

    Uttman, L; Bitzén, U; De Robertis, E; Enoksson, J; Johansson, L; Jonson, B

    2012-10-01

    Low tidal volume (V(T)), PEEP, and low plateau pressure (P(PLAT)) are lung protective during acute respiratory distress syndrome (ARDS). This study tested the hypothesis that the aspiration of dead space (ASPIDS) together with computer simulation can help maintain gas exchange at these settings, thus promoting protection of the lungs. ARDS was induced in pigs using surfactant perturbation plus an injurious ventilation strategy. One group then underwent 24 h protective ventilation, while control groups were ventilated using a conventional ventilation strategy at either high or low pressure. Pressure-volume curves (P(el)/V), blood gases, and haemodynamics were studied at 0, 4, 8, 16, and 24 h after the induction of ARDS and lung histology was evaluated. The P(el)/V curves showed improvements in the protective strategy group and deterioration in both control groups. In the protective group, when respiratory rate (RR) was ≈ 60 bpm, better oxygenation and reduced shunt were found. Histological damage was significantly more severe in the high-pressure group. There were no differences in venous oxygen saturation and pulmonary vascular resistance between the groups. The protective ventilation strategy of adequate pH or PaCO2 with minimal V(T), and high/safe P(PLAT) resulting in high PEEP was based on the avoidance of known lung-damaging phenomena. The approach is based upon the optimization of V(T), RR, PEEP, I/E, and dead space. This study does not lend itself to conclusions about the independent role of each of these features. However, dead space reduction is fundamental for achieving minimal V(T) at high RR. Classical physiology is applicable at high RR. Computer simulation optimizes ventilation and limiting of dead space using ASPIDS. Inspiratory P(el)/V curves recorded from PEEP or, even better, expiratory P(el)/V curves allow monitoring in ARDS.

  13. The optical antenna system design research on earth integrative network laser link in the future

    NASA Astrophysics Data System (ADS)

    Liu, Xianzhu; Fu, Qiang; He, Jingyi

    2014-11-01

    The earth integrated information network acquires, transmits and processes spatial information in real time using carriers based on space platforms, such as geostationary or low-orbit satellites, stratospheric balloons, and unmanned or manned aircraft. It is an essential infrastructure for China to construct. The earth integrated information network can support not only the highly dynamic, real-time transmission of broadband earth-observation downlinks, but also the reliable transmission of ultra-remote, large-delay uplinks for deep-space exploration, and it can provide services for major applications in ocean voyaging, emergency rescue, navigation and positioning, air transportation, aerospace measurement and control, and other fields. The network can thus expand human science, culture and productive activities into space, the ocean and even deep space, so it is a global research focus. The laser communication link network is an important component and means of communication in the earth integrated information network. Optimizing the structure and designing the optical antenna system is considered one of the key technical difficulties for the space laser communication link network. Therefore, this paper presents an optical antenna system for use in a space laser communication link network. The antenna system consists of multiple mirrors stitched together on a rotational-paraboloid substrate. The optical structure of the stitched multi-mirror system was simulated with the LightTools software, and a Cassegrain form was adopted for the relay optical system, whose structural parameters were optimized and designed with the Zemax optical design software. 
The results of the optimal design and simulation indicate that the antenna system has good optical performance and a certain reference value for engineering. It can provide effective technical support for realizing the interconnection of the earth integrated laser-link information network in the future.

  14. Global effectiveness of group decision-making strategies in coping with forage and price variabilities in commercial rangelands: A modelling assessment.

    PubMed

    Ibáñez, Javier; Martínez-Valderrama, Jaime

    2018-07-01

    This paper presents a modelling study that evaluated the global effectiveness of a range of group decision-making strategies for commercial farming areas in rangelands affected by temporal variations in forage production. The assessment utilised an integrated system dynamics model (86 equations) to examine the broad and long-term group decision outcomes. This model considers aspects usually neglected in related studies, such as the dynamics of the main local prices, the dynamics of the number of active farmers, the supplementary feeding of livestock, and certain behavioural traits of farmers and traders. The assessment procedure was based on an analysis of the outcomes of the model under 330,000 simulation scenarios. The results indicated that the exploitation of the grazing resources was optimal, in certain senses, only if all the farmers in an area were either opportunistic or conservative, that is, either responsive or unresponsive to expected profits. A widespread opportunism proved optimal only from an economic viewpoint. However, it is very unlikely that most of the farmers would agree to be opportunistic in practice. By contrast, a widespread conservatism, which in principle is perfectly feasible, proved optimal from economic, social, and ecological perspectives. Notably, it was found that the presence of a relatively small number of opportunistic farmers would suffice to considerably reduce the economic results of widespread conservatism. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work sponsored by JPL and other organizations to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model and control design are all optimized simultaneously. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology be incorporated into computer-aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization. Recommendations are also presented for near-term research activities at JPL. The key recommendation is to continue the development of integrated dynamic modeling/control design techniques, with special attention given to the development of structural models specially tailored to support design.

  16. The space shuttle payload planning working groups. Volume 1: Astronomy

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The space astronomy missions to be accomplished by the space shuttle are discussed. The principal instrument is the Large Space Telescope optimized for the ultraviolet and visible regions of the spectrum, but usable also in the infrared. Two infrared telescopes are also proposed and their characteristics are described. Other instruments considered for the astronomical observations are: (1) a very wide angle ultraviolet camera, (2) a grazing incidence telescope, (3) Explorer-class free flyers to measure the cosmic microwave background, and (4) rocket-class instruments which can fly frequently on a variety of missions. The stability requirements of the space shuttle for accomplishing the astronomy mission are defined.

  17. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is infeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing new and optimized device designs. Results for nanodevices are presented in a companion paper at this workshop.

  18. Internationally coordinated multi-mission planning is now critical to sustain the space-based rainfall observations needed for managing floods globally

    NASA Astrophysics Data System (ADS)

    Reed, Patrick M.; Chaney, Nathaniel W.; Herman, Jonathan D.; Ferringer, Matthew P.; Wood, Eric F.

    2015-02-01

    At present 4 of 10 dedicated rainfall observing satellite systems have exceeded their design life, some by more than a decade. Here, we show operational implications for flood management of a ‘collapse’ of space-based rainfall observing infrastructure as well as the high-value opportunities for a globally coordinated portfolio of satellite missions and data services. Results show that the current portfolio of rainfall missions fails to meet operational data needs for flood management, even when assuming a perfectly coordinated data product from all current rainfall-focused missions (i.e., the full portfolio). In the full portfolio, satellite-based rainfall data deficits vary across the globe and may preclude climate adaptation in locations vulnerable to increasing flood risks. Moreover, removing satellites that are currently beyond their design life (i.e., the reduced portfolio) dramatically increases data deficits globally and could cause entire high intensity flood events to be unobserved. Recovery from the reduced portfolio is possible with internationally coordinated replenishment of as few as 2 of the 4 satellite systems beyond their design life, yielding rainfall data coverages that outperform the current full portfolio (i.e., an optimized portfolio of eight satellites can outperform ten satellites). This work demonstrates the potential for internationally coordinated satellite replenishment and data services to substantially enhance the cost-effectiveness, sustainability and operational value of space-based rainfall observations in managing evolving flood risks.

  19. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. 
Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
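    The compress-then-back-project step can be illustrated in a few lines. This is a sketch only: `svd_solve` is a hypothetical name, NumPy's SVD stands in for the paper's machinery, and a plain least-squares solve replaces the actual constrained LP over beam weights.

```python
import numpy as np

def svd_solve(A, d, tol=1e-10):
    """Sketch of the SVD acceleration: the influence matrix A (dose = A @ w)
    is nearly degenerate, so compress the problem onto its leading singular
    subspace, solve there, and back-project to recover beam weights w."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))       # effective rank of A
    y = (U[:, :r].T @ d) / s[:r]          # reduced-dimension solve
    return Vt[:r, :].T @ y                # back-project to beam weights
```

    Because only the leading r singular directions carry information, the optimization runs in an r-dimensional space rather than over every beam, which is the source of the reported speed-up; the real method then enforces nonnegativity and l1 sparsity on the reconstructed weights.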

  1. Leveraging human decision making through the optimal management of centralized resources

    NASA Astrophysics Data System (ADS)

    Hyden, Paul; McGrath, Richard G.

    2016-05-01

    Combining results from mixed integer optimization, stochastic modeling and queuing theory, we will advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision-maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision-makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long-term maximum marginal utility gaps in capabilities.

  2. An improved parent-centric mutation with normalized neighborhoods for inducing niching behavior in differential evolution.

    PubMed

    Biswas, Subhodip; Kundu, Souvik; Das, Swagatam

    2014-10-01

    In real life, we often need to find multiple optimal solutions of an optimization problem. Evolutionary multimodal optimization algorithms can be very helpful in such cases. They detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations into their framework. Differential evolution (DE) is a powerful evolutionary algorithm (EA) well known for its ability and efficiency as a single-peak global optimizer for continuous spaces. This article suggests a niching scheme integrated with DE that achieves stable and efficient niching behavior by combining the newly proposed parent-centric mutation operator with a synchronous crowding replacement rule. The proposed approach is designed with the difficulties of problem-dependent niching parameters (like the niche radius) in mind and does not make use of any such control parameter. The mutation operator helps to maintain population diversity at an optimum level by using well-defined local neighborhoods. Based on a comparative study involving 13 well-known state-of-the-art niching EAs tested on an extensive collection of benchmarks, we observe a consistent statistical superiority of the proposed niching algorithm.
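    The two ingredients named above, a parent-centric mutation restricted to each individual's local neighborhood and a crowding replacement rule, can be sketched roughly in Python. Everything here is an illustrative assumption (`niching_de`, the neighborhood size, and the DE constants), not the paper's actual operator definitions.

```python
import math
import random

def niching_de(f, bounds, pop_size=40, gens=150, F=0.5, Cr=0.9, hood=5):
    """Minimizing DE with parent-centric mutation over nearest neighbours
    and crowding replacement: an offspring competes with the most similar
    population member, so several niches can survive in one run."""
    dim = len(bounds)
    pop = [[random.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # neighborhood of the parent: its `hood` nearest members
            idx = sorted(range(pop_size),
                         key=lambda j: math.dist(pop[i], pop[j]))[1:hood + 1]
            r1, r2, r3 = random.sample(idx, 3)
            # parent-centric mutant: perturb the parent, not a random base
            mutant = [pop[i][d] + F * (pop[r1][d] - pop[i][d])
                      + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            trial = [mutant[d] if random.random() < Cr else pop[i][d]
                     for d in range(dim)]
            trial = [max(l, min(h, v)) for v, (l, h) in zip(trial, bounds)]
            ft = f(trial)
            # crowding replacement: compete only with the nearest individual
            j = min(range(pop_size), key=lambda k: math.dist(trial, pop[k]))
            if ft < fit[j]:
                pop[j], fit[j] = trial, ft
    return pop, fit
```

    On a symmetric double-well function the final population typically splits between both minima, which is the niching behavior a single global best would destroy.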

  3. A surrogate-based metaheuristic global search method for beam angle selection in radiation treatment planning

    PubMed Central

    Zhang, H H; Gao, S; Chen, W; Shi, L; D’Souza, W D; Meyer, R R

    2013-01-01

    An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the Nested Partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality. PMID:23459411

  4. Global optimization and reflectivity data fitting for x-ray multilayer mirrors by means of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Pareschi, Giovanni

    2001-01-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thicknesses, densities, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the topology of the variable space is highly structured, as in our case. The application of global optimization methods to fitting multilayer reflectivity data is presented. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual, and the population is a collection of individuals. Each generation is built from the parent generation by applying operators (e.g. selection, crossover, mutation) to the members of the parent generation. The pressure of selection drives the population to include 'good' individuals. After a large number of generations, the best individuals approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C multilayers recorded at the ESRF BM5 are presented. This method could also be applied to help design multilayers optimized for a target application, such as astronomical grazing-incidence hard x-ray telescopes.
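    The generation loop described above (selection, crossover, mutation over a population of parameter sets) can be sketched in Python. The name `ga_fit`, the elite fraction, and the operator rates are illustrative choices, and a toy model function stands in for the actual multilayer reflectivity simulation.

```python
import random

def ga_fit(model, xdata, ydata, bounds, pop_size=60, gens=120,
           crossover_p=0.7, mutation_p=0.1):
    """Fit model parameters to data with a genetic algorithm: each
    individual is one complete parameter set, and each generation is
    built from the previous one by selection, crossover and mutation."""
    dim = len(bounds)

    def cost(p):  # sum-of-squares misfit between model and data
        return sum((model(x, p) - y) ** 2 for x, y in zip(xdata, ydata))

    pop = [[random.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=cost)[:pop_size // 4]   # selection pressure
        children = [p[:] for p in elite]                # elitism
        while len(children) < pop_size:
            a, b = random.sample(elite, 2)
            if random.random() < crossover_p:           # blend crossover
                child = [a[d] + random.random() * (b[d] - a[d])
                         for d in range(dim)]
            else:
                child = a[:]
            for d in range(dim):                        # mutation
                if random.random() < mutation_p:
                    child[d] = random.uniform(*bounds[d])
            children.append(child)
        pop = children
    return min(pop, key=cost)
```

    Because only the parameter bounds are supplied, no starting guess near the optimum is needed, which is precisely the advantage over local non-linear fitting that the abstract argues for.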

  5. Improvement of Automated POST Case Success Rate Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random reps method to confidently approach the global optima for a given vehicle case in a reasonable amount of time. 
With only an 8% pass rate, tens or hundreds of thousands of repetitions may be needed to be confident that the best repetition is at least close to the global optimum. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points backed by only a handful of successful completions. If a small number of successful repetitions is used to generate a seed point, the graph method may inherit inaccuracies as it chains DOE cases from non-global-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method, and ultimately the accuracy of the resulting POST surrogate model, by increasing the case pass rate for seed point generation.
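    The graph-chaining idea described above can be sketched in a few lines. The snippet below is a toy illustration only, not POST: run_case is a hypothetical stand-in whose "trajectory closure" succeeds only when its initial guess lies within a basin around the true solution, and chain_doe always expands the unsolved design point nearest to an already-solved one, reusing that neighbor's solution as the initial guess.

```python
import numpy as np

def run_case(x, guess, basin=1.5):
    """Toy stand-in for a POST run: converges only if the initial guess
    lies within `basin` of the (hypothetical) true optimum for design x."""
    true_opt = np.array([np.sin(x), np.cos(x)])  # pretend the optimum drifts with x
    if np.linalg.norm(guess - true_opt) < basin:
        return true_opt, True
    return None, False

def chain_doe(xs, seed_idx, seed_solution):
    """Chain DOE cases outward from one converged seed point: always run the
    unsolved case nearest (in design space) to a solved case, reusing that
    neighbor's solution as the initial guess."""
    solved = {seed_idx: seed_solution}
    while len(solved) < len(xs):
        pairs = [(abs(xs[i] - xs[j]), i, j)
                 for i in solved for j in range(len(xs)) if j not in solved]
        _, i, j = min(pairs)
        sol, ok = run_case(xs[j], solved[i])
        if not ok:
            break  # the chain stops at the first failed case
        solved[j] = sol
    return solved
```

    Because each case starts from a converged neighbor, the chain can sweep a design range that would mostly fail if every case were started from a random initial guess.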

  6. Why do adults with dyslexia have poor global motion sensitivity?

    PubMed

    Conlon, Elizabeth G; Lilleskaret, Gry; Wright, Craig M; Stuksrud, Anne

    2013-01-01

    Two experiments aimed to determine why adults with dyslexia have higher global motion thresholds than typically reading controls. In Experiment 1, the dot density and the number of animation frames in the dot stimulus were manipulated, because of findings that use of a high dot density can normalize coherence thresholds in individuals with dyslexia. Dot densities were 14.15 and 3.54 dots/deg². These were presented for five (84 ms) or eight (134 ms) frames. The dyslexia group had higher coherence thresholds than controls in all conditions. However, in the high-density, long-duration condition both reader groups had the lowest thresholds, indicating normal temporal recruitment. These results indicated that the dyslexia group could sample the additional signal dots over space and then integrate these with the same efficiency as controls. In Experiment 2, we determined whether briefly presenting a fully coherent prime moving in either the same or the opposite direction of motion to a partially coherent test stimulus would systematically increase and decrease global motion thresholds in the reader groups. When the direction of motion in the prime and test was the same, global motion thresholds increased for both reader groups; the increase in coherence thresholds was significantly greater for the dyslexia group. When the motion of the prime and test were presented in opposite directions, coherence thresholds were reduced in both groups, and no group threshold differences were found. We concluded that the global motion processing deficit found in adults with dyslexia can be explained by undersampling of the target motion signals. This might occur because of difficulties directing attention to the relevant motion signals in the random dot pattern, rather than a specific difficulty integrating global motion signals. These effects are most likely to occur in the group with dyslexia when more complex computational processes are required to process global motion.

  7. Why do adults with dyslexia have poor global motion sensitivity?

    PubMed Central

    Conlon, Elizabeth G.; Lilleskaret, Gry; Wright, Craig M.; Stuksrud, Anne

    2013-01-01

    Two experiments aimed to determine why adults with dyslexia have higher global motion thresholds than typically reading controls. In Experiment 1, the dot density and the number of animation frames in the dot stimulus were manipulated, because of findings that use of a high dot density can normalize coherence thresholds in individuals with dyslexia. Dot densities were 14.15 and 3.54 dots/deg². These were presented for five (84 ms) or eight (134 ms) frames. The dyslexia group had higher coherence thresholds than controls in all conditions. However, in the high-density, long-duration condition both reader groups had the lowest thresholds, indicating normal temporal recruitment. These results indicated that the dyslexia group could sample the additional signal dots over space and then integrate these with the same efficiency as controls. In Experiment 2, we determined whether briefly presenting a fully coherent prime moving in either the same or the opposite direction of motion to a partially coherent test stimulus would systematically increase and decrease global motion thresholds in the reader groups. When the direction of motion in the prime and test was the same, global motion thresholds increased for both reader groups; the increase in coherence thresholds was significantly greater for the dyslexia group. When the motion of the prime and test were presented in opposite directions, coherence thresholds were reduced in both groups, and no group threshold differences were found. We concluded that the global motion processing deficit found in adults with dyslexia can be explained by undersampling of the target motion signals. This might occur because of difficulties directing attention to the relevant motion signals in the random dot pattern, rather than a specific difficulty integrating global motion signals. These effects are most likely to occur in the group with dyslexia when more complex computational processes are required to process global motion.
PMID:24376414

  8. Lead optimization in the nondrug-like space.

    PubMed

    Zhao, Hongyu

    2011-02-01

    Drug-like space might be more densely populated with orally available compounds than the remaining chemical space, but lead optimization can still occur outside this space. Oral drug space is more dynamic than the relatively static drug-like space: as new targets emerge and optimization tools advance, the oral drug space might expand. Lead optimization protocols are becoming more complex, with greater optimization needs to be satisfied, which could consequently change the role of drug-likeness in the process. Whereas drug-like space should usually be explored preferentially, for certain targets it can be easier to find oral drugs in the nondrug-like space. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Preliminary Analysis of Low-Thrust Gravity Assist Trajectories by An Inverse Method and a Global Optimization Technique.

    NASA Astrophysics Data System (ADS)

    de Pascale, P.; Vasile, M.; Casotto, S.

    The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, whatever the specific method employed (direct or indirect), require an initial guess, which deeply influences convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives for exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters, and thanks to the flexibility offered by such engines they typically exhibit a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the global optimum. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, resorting to a new methodology based on an inverse method, in order to account for the beneficial effects of low-thrust propulsion on changes of inclination.
The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a suitably parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, subject to the constraint set, is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be assessed quickly and with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps: an evolutionary search locates a large number of optima, possibly including the global one, while the deterministic step is a branching process that exhaustively partitions the domain in order to characterize this complex solution space extensively. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of the mixed-integer nonlinear programming (MINLP) problems typical of multiple swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, demonstrating the capability of the proposed tool.
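    The core of the inverse method can be illustrated with a deliberately simplified sketch: prescribe a parameterized trajectory satisfying the boundary conditions on position and velocity, then obtain the required thrust acceleration algebraically by differentiating the shape and subtracting gravity, with no numerical propagation. Here a cubic (Hermite) polynomial per axis stands in for the paper's parameterized evolution of orbital parameters; mu is a normalized gravitational parameter, and all names are illustrative.

```python
import numpy as np

def hermite_coeffs(r0, v0, rf, vf, T):
    """Cubic r(t) = c0 + c1*t + c2*t^2 + c3*t^3 (per axis) matching the
    boundary conditions r(0)=r0, r'(0)=v0, r(T)=rf, r'(T)=vf."""
    c0, c1 = r0, v0
    c2 = (3 * (rf - r0) - (2 * v0 + vf) * T) / T**2
    c3 = (-2 * (rf - r0) + (v0 + vf) * T) / T**3
    return c0, c1, c2, c3

def required_thrust_accel(c, t, mu=1.0):
    """Thrust acceleration the engine must supply at time t: since the shape
    is closed-form, r''(t) is algebraic, and thrust = r''(t) - gravity."""
    c0, c1, c2, c3 = c
    r = c0 + c1 * t + c2 * t**2 + c3 * t**3   # position on the prescribed shape
    a_total = 2 * c2 + 6 * c3 * t             # total acceleration r''(t)
    grav = -mu * r / np.linalg.norm(r)**3     # point-mass gravity
    return a_total - grav
```

    Because r(t) is closed-form, evaluating the thrust profile at any time is a purely algebraic operation, which is what makes such shape-based first guesses cheap enough for broad global searches.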

  10. Investigating Pharmacological Similarity by Charting Chemical Space.

    PubMed

    Buonfiglio, Rosa; Engkvist, Ola; Várkonyi, Péter; Henz, Astrid; Vikeved, Elisabet; Backlund, Anders; Kogej, Thierry

    2015-11-23

    In this study, biologically relevant areas of the chemical space were analyzed using ChemGPS-NP. This application enables comparing groups of ligands within a multidimensional space based on principal components derived from physicochemical descriptors. Also, 3D visualization of the ChemGPS-NP global map can be used to conveniently evaluate bioactive compound similarity and visually distinguish between different types or groups of compounds. To further establish ChemGPS-NP as a method to accurately represent the chemical space, a comparison with structure-based fingerprints has been performed. Interesting complementarities between the two descriptions of molecules were observed. It has been shown that describing molecules with physicochemical descriptors, as in ChemGPS-NP, retrieves bioactive molecules with accuracy similar to that of structural fingerprints. Lastly, the pharmacological similarity of structurally diverse compounds has been investigated in ChemGPS-NP space. These results further strengthen the case for using ChemGPS-NP as a tool to explore and visualize chemical space.
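    The kind of descriptor-space projection ChemGPS-NP performs can be sketched with a plain principal component analysis via SVD. The descriptor matrix below is random stand-in data, not real physicochemical descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical physicochemical descriptors (rows = compounds,
# cols = descriptors such as logP, MW, PSA, ...)
X = rng.normal(size=(50, 8))

# Center, then derive principal components from the SVD of the centered data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Each compound's coordinates on the first three principal components:
# this is the low-dimensional "map" position used for visual comparison.
coords = Xc @ Vt[:3].T

# Fraction of variance captured by each component
explained = s**2 / np.sum(s**2)
```

    Groups of ligands can then be compared by where their points fall in the 3D coordinate space, exactly as the global map described above is used.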

  11. Mapping of Drug-like Chemical Universe with Reduced Complexity Molecular Frameworks.

    PubMed

    Kontijevskis, Aleksejs

    2017-04-24

    The emergence of the DNA-encoded chemical libraries (DEL) field in the past decade has attracted the attention of the pharmaceutical industry as a powerful mechanism for the discovery of novel drug-like hits for various biological targets. Nuevolution Chemetics technology enables DNA-encoded synthesis of billions of chemically diverse drug-like small-molecule compounds, and the efficient screening and optimization of these, facilitating effective identification of drug candidates at an unprecedented speed and scale. Although many approaches have been developed by the cheminformatics community for the analysis and visualization of drug-like chemical space, most of them are restricted to the analysis of at most a few million compounds and cannot handle the collections of 10^8-10^12 compounds typical for DELs. To address this big chemical data challenge, we developed the Reduced Complexity Molecular Frameworks (RCMF) methodology as an abstract and very general way of representing chemical structures. By further introducing RCMF descriptors, we constructed a global framework map of drug-like chemical space and demonstrated how the chemical space occupied by multi-million-member drug-like Chemetics DNA-encoded libraries and virtual combinatorial libraries with >10^12 members can be analyzed and mapped without a need for library enumeration. We further validate the approach by performing RCMF-based searches in a drug-like chemical universe and mapping Chemetics library selection outputs for the LSD1 target on a global framework chemical space map.
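    One way to see how statistics of a huge combinatorial library can be computed without enumerating it (an illustrative sketch of the general idea, not the RCMF method itself): for descriptors that combine additively across building blocks, the mean and variance over the full Cartesian-product library follow exactly from per-set statistics. All data below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical additive descriptors for three building-block sets of a DEL
# (rows = blocks, cols = descriptor dimensions). The full library is the
# Cartesian product: 500 * 400 * 300 = 6e7 virtual compounds.
blocks = [rng.normal(loc=m, size=(n, 4)) for m, n in [(0.0, 500), (1.0, 400), (-0.5, 300)]]

# For additively combining descriptors, library-level statistics follow
# from per-set statistics -- no enumeration of the 6e7 products is needed.
lib_mean = sum(b.mean(axis=0) for b in blocks)
lib_var = sum(b.var(axis=0) for b in blocks)
```

    The same trick (pushing statistics through the combinatorial structure instead of through an enumerated list) is what makes mapping >10^12-member virtual libraries tractable at all.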

  12. Global Snow from Space: Development of a Satellite-based, Terrestrial Snow Mission Planning Tool

    NASA Astrophysics Data System (ADS)

    Forman, B. A.; Kumar, S.; LeMoigne, J.; Nag, S.

    2017-12-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary - or perhaps contradictory - information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  13. Seasonal and Inter-Annual Changes in the Distribution of Dominant Phytoplankton Groups in the Global Ocean

    NASA Astrophysics Data System (ADS)

    Severine, A.; Cyril, M.; Yves, D.; Laurent, B.; Hubert, L.

    2006-12-01

    The fate of fixed organic carbon in the ocean strongly varies with the phytoplankton group that performs photosynthesis. Monitoring phytoplankton groups in the global ocean is thus of primary importance for evaluating and improving ocean carbon models. A new method (PHYSAT; Alvain et al., 2005) makes it possible to distinguish four different groups from space using SeaWiFS ocean color measurements. In addition to these four initial phytoplankton groups, which are diatoms, Prochlorococcus, Synechococcus and haptophytes, we show that PHYSAT is also capable of identifying blooms of Phaeocystis and coccolithophorids. Daily global SeaWiFS level-3 data from September 1997 to December 2004 were processed using PHYSAT. We present here the first monthly mean global climatology of the dominant phytoplankton groups. The seasonal cycle is discussed, with particular emphasis on the succession of phytoplankton groups during the North Atlantic spring bloom and on the coexistence of large Phaeocystis and diatom blooms during winter in the Austral Ocean. We also present the inter-annual variability for the 1998-2004 period. The contribution of diatoms to the total chlorophyll is highly variable (up to a factor of two) from one year to the next in both the Atlantic and Austral Oceans, suggesting significant variability in organic carbon export by diatoms in these regions. By contrast, the Phaeocystis contribution is less variable in the Austral Ocean.

  14. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    PubMed

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
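    A heavily simplified, deterministic stand-in for the alignment step can make the idea concrete: given matched model/image feature locations, a globally optimal locally linear (here, affine) transform follows from a single linear least-squares solve. This sketch omits the probabilistic feature model and the MAP marginalization of the actual FBA method; all names are illustrative.

```python
import numpy as np

def fit_affine_3d(model_pts, image_pts):
    """Least-squares 3x4 affine transform mapping model points to image points.

    Each matched feature pair contributes one linear constraint, so the
    globally optimal linear alignment is a closed-form least-squares solve.
    """
    n = len(model_pts)
    M = np.hstack([model_pts, np.ones((n, 1))])   # homogeneous model coordinates
    A, *_ = np.linalg.lstsq(M, image_pts, rcond=None)
    return A.T                                     # shape (3, 4)
```

    In the full method the correspondences themselves are uncertain, which is why features enter as a latent variable; but once correspondences are fixed, the alignment reduces to exactly this kind of solve.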

  15. Efficient and Robust Model-to-Image Alignment using 3D Scale-Invariant Features

    PubMed Central

    Toews, Matthew; Wells, William M.

    2013-01-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a-posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. PMID:23265799

  16. Accelerating atomic structure search with cluster regularization

    NASA Astrophysics Data System (ADS)

    Sørensen, K. H.; Jørgensen, M. S.; Bruix, A.; Hammer, B.

    2018-06-01

    We present a method for accelerating the global structure optimization of atomic compounds. The method is demonstrated to speed up the finding of the anatase TiO2(001)-(1 × 4) surface reconstruction within a density functional tight-binding theory framework using an evolutionary algorithm. As a key element of the method, we use unsupervised machine learning techniques to categorize atoms present in a diverse set of partially disordered surface structures into clusters of atoms having similar local atomic environments. Analysis of more than 1000 different structures shows that the total energy of the structures correlates with the summed distances of the atomic environments to their respective cluster centers in feature space, where the sum runs over all atoms in each structure. Our method is formulated as a gradient based minimization of this summed cluster distance for a given structure and alternates with a standard gradient based energy minimization. While the latter minimization ensures local relaxation within a given energy basin, the former enables escapes from meta-stable basins and hence increases the overall performance of the global optimization.
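    The cluster-regularization quantity can be sketched as follows: per-atom feature vectors (random stand-ins here for real local-environment descriptors) are grouped by a plain k-means, and the penalty is the summed distance of each atom's features to its nearest cluster center, the quantity the paper reports as correlating with total energy.

```python
import numpy as np

def kmeans(F, k, iters=50, seed=0):
    """Plain k-means on per-atom feature vectors F, shape (n_atoms, d)."""
    rng = np.random.default_rng(seed)
    centers = F[rng.choice(len(F), k, replace=False)]
    for _ in range(iters):
        # assign each atom to its nearest center, then update the centers
        labels = np.argmin(np.linalg.norm(F[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    return centers

def summed_cluster_distance(F, centers):
    """Sum over atoms of the distance from each atom's local-environment
    feature vector to its nearest cluster center."""
    d = np.linalg.norm(F[:, None] - centers[None], axis=2)
    return d.min(axis=1).sum()
```

    Minimizing this sum with respect to atomic positions (through the feature map) pushes atoms toward familiar local environments, which is what lets the search escape meta-stable basins between ordinary energy relaxations.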

  17. Functional equations for orbifold wreath products

    NASA Astrophysics Data System (ADS)

    Farsi, Carla; Seaton, Christopher

    2017-10-01

    We present generating functions for extensions of multiplicative invariants of wreath symmetric products of orbifolds presented as the quotient by the locally free action of a compact, connected Lie group in terms of orbifold sector decompositions. Particularly interesting instances of these product formulas occur for the Euler and Euler-Satake characteristics, which we compute for a class of weighted projective spaces. This generalizes results known for global quotients by finite groups to all closed, effective orbifolds. We also describe a combinatorial approach to extensions of multiplicative invariants using decomposable functors that recovers the formula for the Euler-Satake characteristic of a wreath product of a global quotient orbifold.

  18. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with folded concave penalties. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems is equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
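    For reference, the SCAD penalty cited above has a simple closed form: linear (lasso-like) near zero, quadratically tapering, then constant, so large coefficients are not over-penalized. The sketch below uses a = 3.7, the value suggested by Fan and Li.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), evaluated elementwise on |t|.

    p(t) = lam*|t|                                   for |t| <= lam
         = -(t^2 - 2*a*lam*|t| + lam^2)/(2*(a-1))    for lam < |t| <= a*lam
         = (a+1)*lam^2 / 2                           for |t| > a*lam
    """
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1)),
            (a + 1) * lam**2 / 2,
        ),
    )
```

    The folded concave (non-convex) middle branch is exactly what makes the penalized problem hard, and what MIPGO's mixed integer reformulation handles with a global guarantee.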

  19. Delineation and geometric modeling of road networks

    NASA Astrophysics Data System (ADS)

    Poullis, Charalambos; You, Suya

    In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for encoding the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping and results in segmentations with better-defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width and orientation), which is then used for creating road segments and transforming them to their polygonal representations.
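    The Gabor filtering stage can be sketched directly: a real 2D Gabor kernel is a sinusoidal carrier under an oriented Gaussian envelope, and convolving an image with a bank of such kernels at several orientations yields the local orientation responses used for classification. Parameter names below are illustrative.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real 2D Gabor kernel: a cosine carrier of wavelength `lam` (pixels)
    at orientation `theta`, under a Gaussian envelope of width `sigma`
    (with aspect ratio `gamma`)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)
```

    A typical bank evaluates this kernel at, say, 8 values of theta and takes the per-pixel maximum response as the local orientation evidence feeding the tensor-voting stage.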

  20. Optimal fold symmetry of LH2 rings on a photosynthetic membrane

    PubMed Central

    Cleary, Liam; Chen, Hang; Chuang, Chern; Silbey, Robert J.; Cao, Jianshu

    2013-01-01

    An intriguing observation of photosynthetic light-harvesting systems is the N-fold symmetry of light-harvesting complex 2 (LH2) of purple bacteria. We calculate the optimal rotational configuration of N-fold rings on a hexagonal lattice and establish two related mechanisms for the promotion of maximum excitation energy transfer (EET). (i) For certain fold numbers, there exist optimal basis cells with rotational symmetry, extendable to the entire lattice for the global optimization of the EET network. (ii) The type of basis cell can reduce or remove the frustration of EET rates across the photosynthetic network. We find that the existence of a basis cell and its type are directly related to the number of matching points S between the fold symmetry and the hexagonal lattice. The two complementary mechanisms provide selection criteria for the fold number and identify groups of consecutive numbers. Remarkably, one such group consists of the naturally occurring 8-, 9-, and 10-fold rings. By considering the inter-ring distance and EET rate, we demonstrate that this group can achieve minimal rotational sensitivity in addition to an optimal packing density, achieving robust and efficient EET. This corroborates our findings (i) and (ii) and, through their direct relation to S, suggests the design principle of matching the internal symmetry with the lattice order. PMID:23650366

  1. Optimal fold symmetry of LH2 rings on a photosynthetic membrane.

    PubMed

    Cleary, Liam; Chen, Hang; Chuang, Chern; Silbey, Robert J; Cao, Jianshu

    2013-05-21

    An intriguing observation of photosynthetic light-harvesting systems is the N-fold symmetry of light-harvesting complex 2 (LH2) of purple bacteria. We calculate the optimal rotational configuration of N-fold rings on a hexagonal lattice and establish two related mechanisms for the promotion of maximum excitation energy transfer (EET). (i) For certain fold numbers, there exist optimal basis cells with rotational symmetry, extendable to the entire lattice for the global optimization of the EET network. (ii) The type of basis cell can reduce or remove the frustration of EET rates across the photosynthetic network. We find that the existence of a basis cell and its type are directly related to the number of matching points S between the fold symmetry and the hexagonal lattice. The two complementary mechanisms provide selection criteria for the fold number and identify groups of consecutive numbers. Remarkably, one such group consists of the naturally occurring 8-, 9-, and 10-fold rings. By considering the inter-ring distance and EET rate, we demonstrate that this group can achieve minimal rotational sensitivity in addition to an optimal packing density, achieving robust and efficient EET. This corroborates our findings (i) and (ii) and, through their direct relation to S, suggests the design principle of matching the internal symmetry with the lattice order.

  2. Optimal distance of multi-plane sensor in three-dimensional electrical impedance tomography.

    PubMed

    Hao, Zhenhua; Yue, Shihong; Sun, Benyuan; Wang, Huaxiang

    2017-12-01

    Electrical impedance tomography (EIT) is a visual imaging technique for obtaining the conductivity and permittivity distributions in a domain of interest. As an advanced technique, EIT has the potential to be a valuable tool for continuous bedside monitoring of pulmonary function. EIT applications in three-dimensional (3D) fields are strongly limited by 3D effects, i.e., the electric field distribution spreads far beyond the electrode plane. These 3D effects can result in measurement errors and image distortion. An important way to overcome them is to use multiple groups of sensors. The aim of this paper is to find the best spatial resolution of the EIT image over various electrode planes, select an optimal plane spacing in a 3D EIT sensor, and provide guidance for 3D EIT electrode placement in monitoring lung function. In simulation and experiment, several typical conductivity distribution models, such as one rod (central, midway and edge), two rods and three rods, are set at different spacings between the two electrode planes. A Tikhonov regularization algorithm is utilized for reconstructing the images; the relative error and the correlation coefficient are utilized for evaluating image quality. Based on numerical simulation and experimental results, the image performance at different spacing conditions is evaluated. The results demonstrate that an optimal plane spacing between the two electrode planes exists for a 3D EIT sensor, and the selection of this optimal spacing is suggested for the electrode placement of a multi-plane EIT sensor.
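    The reconstruction step can be sketched in one line of linear algebra. In the usual one-step linearized formulation, Tikhonov regularization picks the conductivity update x minimizing ||Jx - v||^2 + alpha*||x||^2, where J is the sensitivity (Jacobian) matrix and v the measured boundary-voltage changes. The names and toy dimensions below are illustrative.

```python
import numpy as np

def tikhonov_reconstruct(J, v, alpha=1e-2):
    """One-step linearized EIT reconstruction:
    x = (J^T J + alpha I)^{-1} J^T v, the minimizer of
    ||J x - v||^2 + alpha ||x||^2."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + alpha * np.eye(JtJ.shape[0]), J.T @ v)
```

    The regularization weight alpha trades data fit against noise amplification; in the ill-conditioned EIT setting it is the knob that keeps the inverse solve stable.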

  3. Assessing Space Exploration Technology Requirements as a First Step Towards Ensuring Technology Readiness for International Cooperation in Space Exploration

    NASA Technical Reports Server (NTRS)

    Laurini, Kathleen C.; Hufenbach, Bernhard; Satoh, Maoki; Piedboeuf, Jean-Claude; Neumann, Benjamin

    2010-01-01

    Advancing critical and enhancing technologies is considered essential to enabling sustainable and affordable human space exploration. Critical technologies are those that enable a certain class of mission, such as technologies necessary for safe landing on the Martian surface, advanced propulsion, and closed-loop life support. Others enhance the mission by leading to greater satisfaction of mission objectives or increased probability of mission success. Advanced technologies are needed to reduce mass and cost. Many space agencies have studied exploration mission architectures and scenarios, with the resulting lists of critical and enhancing technologies being very similar. With this in mind, and with the recognition that human space exploration will only be enabled by agencies working together to address these challenges, interested agencies participating in the International Space Exploration Coordination Group (ISECG) have agreed to perform a technology assessment as an important step in exploring cooperation opportunities for future exploration mission scenarios. "The Global Exploration Strategy: The Framework for Coordination" was developed by fourteen space agencies and released in May 2007. Since the fall of 2008, several ISECG participating space agencies have been studying concepts for human exploration of the moon and have identified technologies considered critical to, or enhancing of, sustainable space exploration. Technologies such as in-situ resource utilization, advanced power generation and energy storage systems, reliable dust-resistant mobility systems, and closed-loop life support systems are important examples. Similarly, agencies such as NASA, ESA, and Russia have studied Mars exploration missions and identified critical technologies.
They recognize that human and robotic precursor missions to destinations such as LEO, the moon, and near-Earth objects provide opportunities to demonstrate the technologies needed for Mars missions. Agencies see the importance of assessing gaps and overlaps in their plans to advance technologies, in order to leverage their investments and enable exciting missions as soon as practical, while respecting the ability of any agency to invest in any technologies it considers interesting or strategic. This paper will describe the importance of developing an appropriate international strategy for technology development and ideas for effective mechanisms for advancing such a strategy. This work will both inform and be informed by the development of an ISECG Global Exploration Roadmap and serve as a concrete step forward in advancing the Global Exploration Strategy.

  4. Accessing global data from accelerator devices

    DOEpatents

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.

    2016-12-06

    An aspect includes a table of contents (TOC) that was generated by a compiler being received at an accelerator device. The TOC includes an address of global data in a host memory space. The global data is copied from the address in the host memory space to an address in the device memory space. The address in the host memory space is obtained from the received TOC. The received TOC is updated to indicate that global data is stored at the address in the device memory space. A kernel that accesses the global data from the address in the device memory space is executed. The address in the device memory space is obtained based on contents of the updated TOC. When the executing is completed, the global data from the address in the device memory space is copied to the address in the host memory space.
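
The TOC-mediated copy-in / execute / copy-out flow described above can be mimicked in miniature. The sketch below is purely illustrative, not the patented implementation: plain dictionaries stand in for host and device memory, and the names (`offload`, `host_mem`, `kernel`) are hypothetical.

```python
def offload(toc, host_mem, device_mem, kernel):
    """Copy globals host->device via the TOC, run the kernel, copy results back."""
    saved = {}
    for sym, host_addr in toc.items():
        dev_addr = "dev_" + sym                     # choose a device address
        device_mem[dev_addr] = host_mem[host_addr]  # copy the global to the device
        saved[sym] = host_addr
        toc[sym] = dev_addr           # update the TOC: the data now lives on device
    kernel(toc, device_mem)           # the kernel locates globals via the updated TOC
    for sym, host_addr in saved.items():
        host_mem[host_addr] = device_mem[toc[sym]]  # copy results back to the host
        toc[sym] = host_addr                        # restore the host address

host_mem = {"0x100": 41}
device_mem = {}
toc = {"counter": "0x100"}

def kernel(toc, mem):                 # increments the global while on the device
    mem[toc["counter"]] += 1

offload(toc, host_mem, device_mem, kernel)
print(host_mem["0x100"])  # 42
```

After `offload` returns, the kernel's update is visible at the original host address and the TOC again points into host memory.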

  5. Accessing global data from accelerator devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.

    2016-12-06

    An aspect includes a table of contents (TOC) that was generated by a compiler being received at an accelerator device. The TOC includes an address of global data in a host memory space. The global data is copied from the address in the host memory space to an address in the device memory space. The address in the host memory space is obtained from the received TOC. The received TOC is updated to indicate that global data is stored at the address in the device memory space. A kernel that accesses the global data from the address in the device memory space is executed. The address in the device memory space is obtained based on contents of the updated TOC. When the executing is completed, the global data from the address in the device memory space is copied to the address in the host memory space.

  6. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
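
As a rough illustration of the annealing idea behind MVFSA (the real algorithm runs multiple chains of very fast simulated annealing against WRF output), the single-chain sketch below shows the core perturb/accept/cool loop. The cost function, parameter count, and settings are toy stand-ins, not the study's configuration.

```python
import math, random

random.seed(0)

def skill_score(params):           # toy stand-in for the model-vs-observation skill score
    target = [0.3, 0.7, 0.5, 0.2, 0.9]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def anneal(n_iter=5000, t0=1.0, cooling=0.999):
    x = [random.random() for _ in range(5)]           # five tunable parameters in [0, 1]
    best, best_cost, t = list(x), skill_score(x), t0
    for _ in range(n_iter):
        cand = [min(1.0, max(0.0, xi + random.gauss(0, 0.1))) for xi in x]
        delta = skill_score(cand) - skill_score(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                                  # accept downhill, or uphill with
            if skill_score(x) < best_cost:            # Boltzmann probability
                best, best_cost = list(x), skill_score(x)
        t *= cooling                                  # geometric cooling schedule
    return best, best_cost

best_params, best_cost = anneal()
print(round(best_cost, 4))  # small: the sampler has homed in on low-error parameters
```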

  7. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  8. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  9. Research in Satellite-Fiber Network Interoperability

    NASA Technical Reports Server (NTRS)

    Edelson, Burt

    1997-01-01

    This four-part report evaluates the performance of high-data-rate transmission links using the ACTS satellite and provides a preparatory test framework for two of the space science applications that have been approved for tests and demonstrations as part of the overall ACTS program. The test plan provides the guidance and information necessary to find the optimal values of the transmission parameters and then apply these parameters to specific applications. The first part focuses on the satellite-to-earth link. The second part is a set of tests to study the performance of ATM on the ACTS channel. The third and fourth parts of the test plan cover the space science applications, Global Climate Modeling and Keck Telescope Acquisition Modeling and Control.

  10. Optimal Time-decay Estimates for the Compressible Navier-Stokes Equations in the Critical L^p Framework

    NASA Astrophysics Data System (ADS)

    Danchin, Raphaël; Xu, Jiang

    2017-04-01

    The global existence issue for the isentropic compressible Navier-Stokes equations in the critical regularity framework was addressed in Danchin (Invent Math 141(3):579-614, 2000) more than 15 years ago. However, whether (optimal) time-decay rates could be shown in critical spaces has remained an open question. Here we give a positive answer to that issue not only in the L^2 critical framework of Danchin (Invent Math 141(3):579-614, 2000) but also in the general L^p critical framework of Charve and Danchin (Arch Ration Mech Anal 198(1):233-271, 2010), Chen et al. (Commun Pure Appl Math 63(9):1173-1224, 2010), and Haspot (Arch Ration Mech Anal 202(2):427-460, 2011): we show that under a mild additional decay assumption that is satisfied if, for example, the low frequencies of the initial data are in L^{p/2}(R^d), the L^p norm (in fact the slightly stronger Ḃ^0_{p,1} norm) of the critical global solutions decays like t^{-d(1/p - 1/4)} as t → +∞, exactly as first observed by Matsumura and Nishida (Proc Jpn Acad Ser A 55:337-342, 1979) in the case p = 2 and d = 3, for solutions with high Sobolev regularity. Our method relies on refined time-weighted inequalities in the Fourier space, and is likely to be effective for other hyperbolic/parabolic systems that are encountered in fluid mechanics or mathematical physics.

  11. A comparison of automated dispensing cabinet optimization methods.

    PubMed

    O'Neil, Daniel P; Miller, Adam; Cronin, Daniel; Hatfield, Chad J

    2016-07-01

    Results of a study comparing two methods of optimizing automated dispensing cabinets (ADCs) are reported. Eight nonprofiled ADCs were optimized over six months. Optimization of each cabinet involved three steps: (1) removal of medications that had not been dispensed for at least 180 days, (2) movement of ADC stock to better suit end-user needs and available space, and (3) adjustment of par levels (desired on-hand inventory levels). The par levels of four ADCs (the Day Supply group) were adjusted according to average daily usage; the par levels of the other four ADCs (the Formula group) were adjusted using a standard inventory formula. The primary outcome was the vend:fill ratio, while secondary outcomes included total inventory, inventory cost, quantity of expired medications, and ADC stockout percentage. The total number of medications stocked in the eight machines was reduced from 1,273 in a designated two-month preoptimization period to 1,182 in a designated two-month postoptimization period, yielding a carrying cost savings of $44,981. The mean vend:fill ratios before and after optimization were 4.43 and 4.46, respectively. The vend:fill ratio for ADCs in the Formula group increased from 4.33 before optimization to 5.2 after optimization; in the Day Supply group, the ratio declined (from 4.52 to 3.90). The postoptimization interaction difference between the Formula and Day Supply groups was found to be significant (p = 0.0477). ADC optimization via a standard inventory formula had a positive impact on inventory costs, refills, vend:fill ratios, and stockout percentages. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic

    NASA Technical Reports Server (NTRS)

    Neuman, Frank; Erzberger, Heinz

    1991-01-01

    The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence for aircraft. First-come-first-served sequencing (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other. Traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
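
The FCFS step with type-dependent spacing can be sketched as a toy scheduler: order arrivals by estimated time of arrival, then push each landing time out far enough to respect the separation minimum for the leading/trailing aircraft pair. The separation values below are invented for illustration and are not actual wake-turbulence minima.

```python
# illustrative separation minima in seconds, keyed by (leading, trailing) type
SEP = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
       ("large", "heavy"): 60, ("large", "large"): 60}

def fcfs_schedule(arrivals):
    """arrivals: (eta_seconds, type) pairs; returns FCFS landing times."""
    times, prev = [], None
    for eta, kind in sorted(arrivals):       # FCFS: order by estimated arrival
        slot = eta if prev is None else max(eta, prev[0] + SEP[(prev[1], kind)])
        times.append((slot, kind))
        prev = (slot, kind)
    return times

sched = fcfs_schedule([(0, "heavy"), (30, "large"), (50, "large")])
print(sched)  # [(0, 'heavy'), (120, 'large'), (180, 'large')]
```

The large aircraft trailing the heavy absorbs the biggest gap, which is exactly the effect the reordering heuristics in the report exploit to shorten tightly spaced groups.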

  13. A stochastic algorithm for global optimization and for best populations: A test case of side chains in proteins

    PubMed Central

    Glick, Meir; Rayan, Anwar; Goldblum, Amiram

    2002-01-01

    The problem of global optimization is pivotal in a variety of scientific fields. Here, we present a robust stochastic search method that is able to find the global minimum for a given cost function, as well as, in most cases, any number of best solutions for very large combinatorial “explosive” systems. The algorithm iteratively eliminates variable values that contribute consistently to the highest end of a cost function's spectrum of values for the full system. Values that have not been eliminated are retained for a full, exhaustive search, allowing the creation of an ordered population of best solutions, which includes the global minimum. We demonstrate the ability of the algorithm to explore the conformational space of side chains in eight proteins, with 54 to 263 residues, to reproduce a population of their low energy conformations. The 1,000 lowest energy solutions are identical in the stochastic (with two different seed numbers) and full, exhaustive searches for six of eight proteins. The others retain the lowest 141 and 213 (of 1,000) conformations, depending on the seed number, and the maximal difference between stochastic and exhaustive is only about 0.15 Kcal/mol. The energy gap between the lowest and highest of the 1,000 low-energy conformers in eight proteins is between 0.55 and 3.64 Kcal/mol. This algorithm offers real opportunities for solving problems of high complexity in structural biology and in other fields of science and technology. PMID:11792838
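
A schematic version of the elimination idea (not the published protocol, and a toy cost function rather than side-chain energies) might look like this: sample full assignments, estimate how each surviving value scores, drop the consistently worst value per variable, and finally search the pruned space exhaustively.

```python
import itertools, random, statistics

random.seed(1)
DOMAIN = {i: [0, 1, 2, 3] for i in range(6)}   # six variables, four values each

def cost(assign):                              # toy cost: value 1 is best everywhere
    return sum((v - 1) ** 2 for v in assign.values())

def eliminate(domain, rounds=4, samples=400):
    """Each round, drop the value of each variable whose samples score worst."""
    dom = {k: list(vs) for k, vs in domain.items()}
    for _ in range(rounds):
        pool = [{k: random.choice(vs) for k, vs in dom.items()}
                for _ in range(samples)]
        for k, vs in dom.items():
            if len(vs) <= 2:                   # retain at least two candidate values
                continue
            mean = {v: statistics.mean(cost(a) for a in pool if a[k] == v)
                    for v in vs}
            vs.remove(max(mean, key=mean.get))  # the consistently worst value
    return dom

pruned = eliminate(DOMAIN)
# exhaustive search over the pruned space yields an ordered population of solutions
best = min((dict(zip(pruned, combo))
            for combo in itertools.product(*pruned.values())), key=cost)
print(cost(best))  # 0: the globally best value survived pruning in every variable
```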

  14. International Coordination of Exploring and Using Lunar Polar Volatiles

    NASA Technical Reports Server (NTRS)

    Gruener, J. E.; Suzuki, N. H.; Carpenter, J. D.

    2016-01-01

    Fourteen international space agencies are participating in the International Space Exploration Coordination Group (ISECG), working together to advance a long-range strategy for human and robotic space exploration beyond low earth orbit. The ISECG is a voluntary, non-binding international coordination mechanism through which individual agencies may exchange information regarding interests, objectives, and plans in space exploration with the goal of strengthening both individual exploration programs as well as the collective effort. The ISECG has developed a Global Exploration Roadmap (GER) that reflects the coordinated international dialog and continued preparation for exploration beyond low-Earth orbit, beginning with the Moon and cis-lunar space, and continuing to near-Earth asteroids, and Mars.

  15. Semi-Supervised Learning of Lift Optimization of Multi-Element Three-Segment Variable Camber Airfoil

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Nguyen, Nhan T.

    2017-01-01

    This chapter describes a new intelligent platform for learning optimal designs of morphing wings based on Variable Camber Continuous Trailing Edge Flaps (VCCTEF) in conjunction with a leading edge flap called the Variable Camber Krueger (VCK). The new platform consists of a Computational Fluid Dynamics (CFD) methodology coupled with a semi-supervised learning methodology. The CFD component of the intelligent platform comprises a full Navier-Stokes solution capability (NASA OVERFLOW solver with Spalart-Allmaras turbulence model) that computes flow over a tri-element inboard NASA Generic Transport Model (GTM) wing section. Various VCCTEF/VCK settings and configurations were considered to explore optimal design for high-lift flight during take-off and landing. To determine the globally optimal design of such a system, an extremely large set of CFD simulations is needed. This is not feasible to achieve in practice. To alleviate this problem, recourse was taken to a semi-supervised learning (SSL) methodology, which is based on manifold regularization techniques. A reasonable space of CFD solutions was populated and then the SSL methodology was used to fit this manifold in its entirety, including the gaps in the manifold where there were no CFD solutions available. The SSL methodology, in conjunction with an elastodynamic solver (FiDDLE), was demonstrated in an earlier study involving structural health monitoring. These CFD-SSL methodologies define the new intelligent platform that forms the basis for our search for optimal design of wings. Although the present platform can be used in various other design and operational problems in engineering, this chapter focuses on the high-lift study of the VCK-VCCTEF system. The top few candidate design configurations were identified by solving the CFD problem in a small subset of the design space.
The SSL component was trained on the design space and was then used in a predictive mode to populate a selected set of test points outside the given design space. The new design test space thus populated was evaluated with the CFD component by determining the error between the SSL predictions and the true (CFD) solutions, which was found to be small. This demonstrates the proposed CFD-SSL methodologies for isolating the best design of the VCK-VCCTEF system, and it holds promise for quantitatively identifying the best designs of flight systems in general.
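
As a rough sketch of manifold-regularized semi-supervised fitting in the spirit of the SSL component (not the authors' code; a smooth 1-D toy function stands in for expensive CFD solutions), a graph-Laplacian smoothness penalty lets a handful of "labeled" solutions anchor predictions across the unlabeled gaps in the design space:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))        # 40 design points on a 1-D "manifold"
y_true = np.sin(np.pi * x)                # toy stand-in for a CFD output
labeled = np.zeros(40, dtype=bool)
labeled[::8] = True                       # only a few "CFD solutions" are known
labeled[-1] = True

# Gaussian-affinity graph Laplacian over labeled AND unlabeled points
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# minimize ||M (f - y)||^2 + gamma * f^T L f   =>   (M + gamma L) f = M y
M = np.diag(labeled.astype(float))
gamma = 1e-3
f = np.linalg.solve(M + gamma * L, M @ y_true)

err = np.abs(f - y_true).max()
print(round(float(err), 3))  # modest: unlabeled gaps are filled in smoothly
```

The labeled points pin the fit while the Laplacian term forces the prediction to vary smoothly along the manifold, which is the essential mechanism behind filling gaps where no CFD solutions exist.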

  16. Hydrological Climate Classification: Can We Improve on Köppen-Geiger?

    NASA Astrophysics Data System (ADS)

    Knoben, W.; Woods, R. A.; Freer, J. E.

    2017-12-01

    Classification is essential in the study of complex natural systems, yet hydrology so far has no formal way to structure the climate forcing which underlies hydrologic response. Various climate classification systems can be borrowed from other disciplines but these are based on different organizing principles than a hydrological classification might use. From gridded global data we calculate a gridded aridity index, an aridity seasonality index and a rain-vs-snow index, which we use to cluster global locations into climate groups. We then define the membership degree of nearly 1100 catchments to each of our climate groups based on each catchment's climate and investigate the extent to which streamflow responses within each climate group are similar. We compare this climate classification approach with the often-used Köppen-Geiger classification, using statistical tests based on streamflow signature values. We find that three climate indices are sufficient to distinguish 18 different climate types world-wide. Climates tend to change gradually in space and catchments can thus belong to multiple climate groups, albeit with different degrees of membership. Streamflow responses within a climate group tend to be similar, regardless of the catchments' geographical proximity. A Wilcoxon two-sample test based on streamflow signature values for each climate group shows that the new classification can distinguish different flow regimes. The Köppen-Geiger approach uses 29 climate classes but is less able to differentiate streamflow regimes. Climate forcing exerts a strong control on typical hydrologic response and both change gradually in space. This makes arbitrary hard boundaries in any classification scheme difficult to defend. Any hydrological classification should thus acknowledge these gradual changes in forcing.
Catchment characteristics (soil or vegetation type, land use, etc.) can vary more quickly in space than climate does, which can explain streamflow differences between geographically close locations. In summary, this work shows that hydrology needs its own way to structure climate forcing, acknowledging that climates vary gradually on a global scale and explicitly including those climate aspects that drive seasonal changes in hydrologic regimes.
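
The classification step can be illustrated with a toy clustering of the three indices. The synthetic data, the k-means choice, and the inverse-distance membership rule below are illustrative assumptions, not the paper's exact procedure; they only show how hard clusters and soft (graded) membership can coexist.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic grid cells: [aridity, aridity seasonality, rain-vs-snow fraction]
cells = np.vstack([rng.normal(c, 0.05, (100, 3))
                   for c in ([0.2, 0.1, 0.9], [0.8, 0.5, 0.8], [0.5, 0.3, 0.1])])

def kmeans(X, k, iters=50):
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # assign each cell to nearest centroid
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels

centroids, labels = kmeans(cells, 3)

# soft membership of one catchment in each climate group (inverse distance)
catchment = np.array([0.5, 0.3, 0.15])
weights = 1.0 / (np.linalg.norm(centroids - catchment, axis=1) + 1e-9)
membership = weights / weights.sum()
print(membership.round(2))  # degrees of membership summing to 1
```

A catchment near a cluster boundary receives comparable membership in several groups, mirroring the paper's point that climates change gradually and hard boundaries are difficult to defend.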

  17. A satellite constellation optimization for a regional GNSS remote sensing mission

    NASA Astrophysics Data System (ADS)

    Gavili Kilaneh, Narin; Mashhadi Hossainali, Masoud

    2017-04-01

    Due to the recent advances in Global Navigation Satellite System Remote sensing (GNSS-R) applications, optimization of a satellite orbit to investigate the Earth's properties seems significant. The comparison of the GNSS direct and reflected signals received by a Low Earth Orbit (LEO) satellite introduces a new technique to remotely sense the Earth. Several GNSS-R missions, including the Cyclone Global Navigation Satellite System (CYGNSS), have been proposed for different applications such as ocean wind speed and height monitoring. The geometric optimization of the satellite orbit before starting the mission is a key step for every space mission. Since satellite constellation design varies depending on the application, we have focused on the geometric criteria required for oceanography applications in a specified region. Here, the total number of specular points, their spatial distribution, and the accuracy of their position are assumed to be sufficient for oceanography applications. Gleason's method is used to determine the position of specular points. We considered the 2-D lattice and 3-D lattice theory of flower constellations to survey whether a circular orbit or an elliptical one is better suited to improve the solution. A genetic algorithm is implemented to solve the problem. To check the visibility condition between the LEO and GPS satellites, the satellite initial state is propagated by a variable-step-size numerical integration method. The constellation orbit parameters achieved by optimization provide better resolution and precision for the specular points in the study area of this research.

  18. Local dark energy: HST evidence from the vicinity of the M81/M82 galaxy group

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Karachentsev, I. D.; Kashibadze, O. G.; Makarov, D. I.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.

    2007-10-01

    The Hubble Space Telescope observations of the nearby galaxy group M81/M82 and its vicinity indicate that the dynamics of the expansion outflow around the group is dominated by the antigravity of the dark energy background. The local density of dark energy in the area is estimated to be near the global dark energy density or perhaps exactly equal to it. This conclusion agrees well with our previous results for the Local Group vicinity and the vicinity of the Cen A/M83 group.

  19. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
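
A compact sketch of the frog-leaping loop follows; a toy quadratic objective stands in for the dispersion-curve misfit, and the memeplex sizes, bounds, and iteration counts are arbitrary choices, not the paper's settings. The worst frog in each memeplex leaps toward the local best, falls back to the global best, and is finally reset at random if neither leap helps.

```python
import random

random.seed(4)
DIM = 3

def misfit(x):                     # toy stand-in for a dispersion-curve misfit
    return sum(xi ** 2 for xi in x)

def leap(worst, towards):          # jump a random fraction toward a better frog
    return [w + random.random() * (t - w) for w, t in zip(worst, towards)]

def sfla(n_frogs=30, n_memeplexes=5, iters=200):
    frogs = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=misfit)                  # shuffle step: re-rank the frogs
        g_best = frogs[0]
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for mem in memeplexes:
            worst = max(mem, key=misfit)
            new = leap(worst, min(mem, key=misfit))        # toward the local best
            if misfit(new) >= misfit(worst):
                new = leap(worst, g_best)                  # toward the global best
            if misfit(new) >= misfit(worst):
                new = [random.uniform(-5, 5) for _ in range(DIM)]  # random reset
            worst[:] = new                       # replace the worst frog in place
        frogs = [f for mem in memeplexes for f in mem]
    return min(frogs, key=misfit)

best = sfla()
print(round(misfit(best), 4))
```

Because the best frog is never the worst member of its memeplex, the incumbent solution is preserved while the random resets keep some diversity in the population.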

  20. In Women’s Eyes

    PubMed Central

    Orza, Luisa; Bass, Emily; Bell, Emma; Crone, E. Tyler; Damji, Nazneen; Dilmitis, Sophie; Tremlett, Liz; Aidarus, Nasra; Stevenson, Jacqui; Bensaid, Souhaila; Kenkem, Calorine; Ross, Gracia Violeta; Kudravtseva, Elena

    2017-01-01

    Abstract There is rightly a huge global effort to enable women living with HIV to have long productive lives, through treatment access. However, many women living with HIV experience violence against women (VAW), in both domestic and health care settings. The ways in which VAW might prevent treatment access and adherence for women has not to date been reviewed coherently at the global level, from women’s own perspectives. Meanwhile, funding for global health care, including HIV treatment, is shrinking. To optimize women’s health and know how best to optimize facilitators and minimize barriers to access and adherence, especially in this shrinking funding context, we need to understand more about these issues from women’s own perspectives. In response, we conducted a three-phase review: (1) a literature review (phase one); (2) focus group discussions and interviews with nearly 200 women living with HIV from 17 countries (phase two); and (3) three country case studies (phase three). The results presented here are based predominantly on women’s own experiences and are coherent across all three phases. Recommendations are proposed regarding laws, policies, and programs which are rights-based, gendered, and embrace diversity, to maximize women’s voluntary, informed, confidential, and safe access to and adherence to medication, and optimize their long-term sexual and reproductive health. PMID:29302173

  1. Research on optimal path planning algorithm of task-oriented optical remote sensing satellites

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Xu, Shengli; Liu, Fengjing; Yuan, Jingpeng

    2015-08-01

    The GEO task-oriented optical remote sensing satellite is very suitable for long-term continuous monitoring and quick-access imaging. With the development of high-resolution optical payload technology and satellite attitude control technology, GEO optical remote sensing satellites will become an important development trend for aerospace remote sensing in the near future. In this paper, we focused on the plane-array staring imaging characteristics of the GEO optical remote sensing satellite and its real-time leading Earth-observation mode, aimed to satisfy the user's needs at the minimum maneuver cost, and put forward an optimal path planning algorithm centered on the transformation from geographic coordinate space to the field-of-view plane, which ultimately reduces the burden on the control system. In this algorithm, a bounded irregular closed area on the ground is transformed, via the coordinate transformation relations, into the reference plane of the satellite payload's field of view; the branch-and-bound method is then used to search for feasible solutions, cutting off infeasible solutions in the solution space with a pruning strategy, and finally trimming suboptimal feasible solutions according to the optimization index until the globally optimal feasible solution remains. Simulation and visualization software test results verified the feasibility and effectiveness of the strategy.

  2. PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads

    PubMed Central

    Cheng, Wen-Chang

    2012-01-01

    In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the foraging behavior of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
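    The velocity-and-position update at the heart of particle swarm optimization can be shown generically. This is a minimal sketch on a toy quadratic objective, not the authors' lane-model fitting; the inertia and acceleration coefficients are conventional textbook values, assumed here.

```python
import random

def pso(f, dim, n=20, iters=60, bounds=(-10.0, 10.0), w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: each particle blends inertia, a pull toward its own best
    position (pbest), and a pull toward the swarm's best position (gbest)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

random.seed(0)
sphere = lambda x: sum(v * v for v in x)
gbest = pso(sphere, dim=2)
```

    In the paper's pipeline the objective would score a candidate lane model against the image; the cooperative pull toward `gbest` is what lets the swarm escape the local optima the particle filter settles into.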

  3. Assessment of economically optimal water management and geospatial potential for large-scale water storage

    NASA Astrophysics Data System (ADS)

    Weerasinghe, Harshi; Schneider, Uwe A.

    2010-05-01

    Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources. This would in turn keep social welfare and economic benefit at their optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at the watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, and land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydraulic soil groups, and for mean annual precipitation levels, is derived by the SCS-CN method.
    Using overlay and decision tree algorithms in GIS, potential water storage sites are identified for constructing regional reservoirs. Subsequently, sites are prioritized based on runoff generation potential (m3 per unit area) and geographical suitability for constructing storage structures. The results from the spatial analysis are used as input for the optimization model. Allocation of resources and appropriate dimensions for dams and associated structures are identified using the optimization model. The model evaluates the capability of alternative reservoirs for cost-efficient water management. The Geographic Information System is used to store, analyze, and integrate spatially explicit and non-spatial attribute information, whereas the algebraic modeling platform is used to develop the dynamic optimization model. The results of this methodology are validated over space against satellite remote sensing data and existing data on reservoir capacities and runoff. The method is suitable for application to on-farm water storage structures, water distribution networks, and moisture conservation structures in a global context.
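    The SCS-CN runoff relation the abstract relies on is standard and compact enough to show directly. The sketch below uses the common metric form with the usual 0.2 initial-abstraction ratio; the storm depth and curve number in the example are arbitrary.

```python
def scs_runoff(p_mm, cn):
    """SCS Curve Number direct runoff (mm) for a storm rainfall p_mm.

    s  : potential maximum retention after runoff begins (mm)
    ia : initial abstraction, commonly taken as 0.2 * s
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0                      # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_runoff(100.0, 80)               # 100 mm storm on a CN-80 watershed
```

    Higher curve numbers (more impervious or wetter conditions) yield more runoff for the same storm, which is the ordering the site-prioritization step depends on.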

  4. Detrending the realized volatility in the global FX market

    NASA Astrophysics Data System (ADS)

    Schmidt, Anatoly B.

    2009-05-01

    There has been growing interest in the realized volatility (RV) of financial assets, calculated using intra-day returns. The choice of an optimal time grid for these calculations is not trivial and generally requires analysis of the RV dependence on the grid spacing (the so-called RV signature). Typical RV signatures have a maximum at the finest time grid spacing available, which is attributed to microstructure effects. This maximum decays into a plateau at lower frequencies, which implies (almost) stationary return variance. We found that RV signatures in the modern global FX market may have no plateau or may even have a maximum at lower frequencies. Simple averaging methods used to address microstructure effects in equities have no practical effect on the FX RV signatures. We show that local detrending of the high-frequency FX rate samples yields RV signatures with a pronounced plateau. This implies that FX rates can be described by a Brownian motion having a non-stationary trend and stationary variance. We point to the role of algorithmic trading as a possible cause of micro-trends in FX rates.
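    The two estimators compared in the abstract, plain realized variance and a locally detrended version, can be sketched as follows. This is a generic illustration with a simple rolling least-squares detrend, not the paper's exact procedure; the window length and synthetic price path are assumptions.

```python
import math

def realized_variance(prices):
    """Plain realized variance: sum of squared log returns over the sample."""
    r = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return sum(x * x for x in r)

def detrended_realized_variance(prices, window=5):
    """Fit and remove a linear trend from log prices in non-overlapping
    windows, then sum the squared increments of the residuals."""
    logp = [math.log(p) for p in prices]
    rv = 0.0
    for i in range(0, len(logp) - window, window):
        seg = logp[i:i + window + 1]
        n = len(seg)
        xbar = (n - 1) / 2.0
        ybar = sum(seg) / n
        sxx = sum((j - xbar) ** 2 for j in range(n))
        slope = sum((j - xbar) * (y - ybar) for j, y in enumerate(seg)) / sxx
        resid = [y - ybar - slope * (j - xbar) for j, y in enumerate(seg)]
        rv += sum((b - a) ** 2 for a, b in zip(resid, resid[1:]))
    return rv

# a pure micro-trend with no noise: all of the apparent variance is trend
trend = [100.0 * 1.001 ** i for i in range(51)]
```

    On the pure trend the plain estimator reports spurious variance while the detrended one is numerically zero, mirroring how detrending recovers the plateau in the FX signatures.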

  5. Probabilistic determination of probe locations from distance data

    PubMed Central

    Xu, Xiao-Ping; Slaughter, Brian D.; Volkmann, Niels

    2013-01-01

    Distance constraints, in principle, can be employed to determine information about the location of probes within a three-dimensional volume. Traditional methods for locating probes from distance constraints involve optimization of scoring functions that measure how well the probe location fits the distance data, exploring only a small subset of the scoring function landscape in the process. These methods are not guaranteed to find the global optimum and provide no means to relate the identified optimum to all other optima in scoring space. Here, we introduce a method for the location of probes from distance information that is based on probability calculus. This method allows exploration of the entire scoring space by directly combining probability functions representing the distance data and information about attachment sites. The approach is guaranteed to identify the global optimum and enables the derivation of confidence intervals for the probe location as well as statistical quantification of ambiguities. We apply the method to determine the location of a fluorescence probe using distances derived by FRET and show that the resulting location matches that independently derived by electron microscopy. PMID:23770585
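    The core idea, multiplying probability functions for each distance constraint over the whole volume rather than descending a scoring landscape, can be sketched on a 2-D grid. The Gaussian distance likelihood, the grid resolution, and the toy anchor layout below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def locate_probe(anchors, distances, sigma, step=0.25, extent=10.0):
    """Exhaustively score a grid by the product of per-distance Gaussian
    likelihoods; the argmax is the global optimum by construction, and the
    normalized score supports confidence statements about the location."""
    n = int(extent / step)
    best, best_score, total = None, -1.0, 0.0
    for ix in range(-n, n + 1):
        for iy in range(-n, n + 1):
            x, y = ix * step, iy * step
            score = 1.0
            for (ax, ay), d in zip(anchors, distances):
                r = math.hypot(x - ax, y - ay)
                score *= math.exp(-((r - d) ** 2) / (2.0 * sigma ** 2))
            total += score
            if score > best_score:
                best, best_score = (x, y), score
    return best, best_score / total

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_xy = (1.0, 2.0)
dists = [math.hypot(true_xy[0] - ax, true_xy[1] - ay) for ax, ay in anchors]
loc, mass = locate_probe(anchors, dists, sigma=0.5)
```

    Because every grid point is scored, secondary maxima (ambiguous probe positions) are visible in the same pass, which is what enables the statistical quantification of ambiguity the abstract describes.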

  6. Crystal structure, phytochemical study and enzyme inhibition activity of Ajaconine and Delectinine

    NASA Astrophysics Data System (ADS)

    Ahmad, Shujaat; Ahmad, Hanif; Khan, Hidayat Ullah; Shahzad, Adnan; Khan, Ezzat; Ali Shah, Syed Adnan; Ali, Mumtaz; Wadud, Abdul; Ghufran, Mehreen; Naz, Humera; Ahmad, Manzoor

    2016-11-01

    The crystal structure, comparative DFT study, and phytochemical investigation of the atisine-type C-20 diterpenoid alkaloid ajaconine (1) and the lycoctonine-type C-19 diterpenoid alkaloid delectinine (2) are reported here. These compounds were isolated from Delphinium chitralense. Both natural products 1 and 2 crystallize in the orthorhombic crystal system with the same space group, P212121. The geometric parameters of both compounds were calculated with DFT using the B3LYP/6-31+G(p) basis set, and the HOMO-LUMO energies, optimized band gaps, global hardness, ionization potential, electron affinity, and global electrophilicity were calculated. Compounds 1 and 2 were screened for acetylcholinesterase and butyrylcholinesterase inhibition activities in a dose-dependent manner, followed by molecular docking to explore the possible inhibitory mechanism of ajaconine (1) and delectinine (2). The IC50 values of the tested compounds against AChE were 12.61 μM (compound 1) and 5.04 μM (compound 2). The same experiments were performed for inhibition of BChE, with IC50 values of 10.18 μM (1) and 9.21 μM (2). Both compounds showed promising inhibition activity against AChE and BChE in comparison with standard drugs available in the market, such as allanzanthane and galanthamine.

  7. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Levis, C. A.; Mount-Campbell, C.; Gonsalvez, D. J.; Wang, C. W.; Yamamura, Y.

    1985-01-01

    Computer-based techniques for optimizing communications-satellite orbit and frequency assignments are discussed. A gradient-search code was tested against a BSS scenario derived from the RARC-83 data. Improvement was obtained, but each iteration required about 50 minutes of IBM-3081 CPU time. Gradient-search experiments on a small FSS test problem, consisting of a single service area served by 8 satellites, showed quickest convergence when the satellites were all initially placed near the center of the available orbital arc with moderate spacing. A transformation technique is proposed for investigating the surface topography of the objective function used in the gradient-search method. A new synthesis approach is based on transforming single-entry interference constraints into corresponding constraints on satellite spacings. These constraints are used with linear objective functions to formulate the co-channel orbital assignment task as a linear-programming (LP) or mixed-integer-programming (MIP) problem. Globally optimal solutions are always found with the MIP formulations, but not necessarily with the LP formulations. The MIP solutions can be used to evaluate the quality of the LP solutions. The initial results are very encouraging.
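    The reduction from interference constraints to spacing constraints is what makes a globally optimal discrete search tractable at small scale. The sketch below is a toy stand-in for the MIP formulation, not the RARC-83 scenario: it assigns three satellites to half-degree orbital slots, enforces a pairwise minimum spacing, and minimizes total deviation from desired longitudes, so the result is globally optimal by enumeration.

```python
from itertools import combinations

def optimal_slots(desired, slots, min_sep):
    """Exhaustive search over discrete orbital slots: pick one slot per
    satellite, enforce pairwise spacing, minimize total deviation from the
    desired longitudes. Small-scale stand-in for a MIP solver."""
    best, best_cost = None, float("inf")
    for combo in combinations(slots, len(desired)):     # combos come out sorted
        if any(b - a < min_sep for a, b in zip(combo, combo[1:])):
            continue                                    # spacing constraint violated
        cost = sum(abs(c - d) for c, d in zip(combo, sorted(desired)))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

slots = [i * 0.5 for i in range(21)]                    # 0 deg .. 10 deg arc
best, cost = optimal_slots([2.0, 2.5, 7.0], slots, min_sep=2.0)
```

    The first two satellites want to sit only 0.5 degrees apart, so the 2-degree spacing constraint forces at least 1.5 degrees of total displacement; the enumeration certifies that no feasible assignment does better, which is the guarantee the MIP formulation provides.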

  8. Dynamic equilibrium strategy for drought emergency temporary water transfer and allocation management

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Ma, Ning; Lv, Chengwei

    2016-08-01

    Efficient water transfer and allocation are critical for disaster mitigation in drought emergencies. This is especially important when the different interests of the multiple decision makers and the fluctuating water resource supply and demand simultaneously cause space and time conflicts. To achieve more effective and efficient water transfers and allocations, this paper proposes a novel optimization method with an integrated bi-level structure and a dynamic strategy, in which the bi-level structure works to deal with space dimension conflicts in drought emergencies, and the dynamic strategy is used to deal with time dimension conflicts. Combining these two optimization methods, however, makes calculation complex, so an integrated interactive fuzzy program and a PSO-POA are combined to develop a hybrid-heuristic algorithm. The successful application of the proposed model in a real world case region demonstrates its practicality and efficiency. Dynamic cooperation between multiple reservoirs under the coordination of a global regulator reflects the model's efficiency and effectiveness in drought emergency water transfer and allocation, especially in a fluctuating environment. On this basis, some corresponding management recommendations are proposed to improve practical operations.

  9. Optimum design of structures subject to general periodic loads

    NASA Technical Reports Server (NTRS)

    Reiss, Robert; Qian, B.

    1989-01-01

    A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of Green's functional and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.

  10. An Improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a robust new swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed, via individual hybrid encoding, into a synchronous evolutionary search over a discrete space. Subsequently, the concept of a confidence interval (CI) is introduced; a new position-updating rule is designed, and genetic mutation with a small probability is added. The former enables the population to move toward the global best solution rapidly in every generation, and the latter effectively prevents ICS from being trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible ones. Experiments on a large number of knapsack-problem instances show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions. PMID:24527026
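    The greedy repair-and-optimize operator is the easiest piece to make concrete. The sketch below is one common value-to-weight-ratio variant, assumed rather than taken from the paper: it drops the worst-ratio items until the capacity constraint holds, then packs any remaining item that still fits.

```python
def greedy_repair(x, weights, values, capacity):
    """Repair an infeasible 0-1 vector, then greedily improve it."""
    order = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
    load = sum(w for w, xi in zip(weights, x) if xi)
    x = list(x)
    for i in order:                      # repair: drop worst-ratio items
        if load <= capacity:
            break
        if x[i]:
            x[i], load = 0, load - weights[i]
    for i in reversed(order):            # optimize: add best-ratio items that fit
        if not x[i] and load + weights[i] <= capacity:
            x[i], load = 1, load + weights[i]
    return x

weights = [3, 4, 5, 2]
values = [30, 28, 20, 24]
x = greedy_repair([1, 1, 1, 1], weights, values, capacity=9)   # overloaded start
```

    In the ICS loop, every candidate produced by the continuous-to-discrete encoding would pass through an operator of this kind before its fitness is evaluated, so the search never wastes effort on infeasible solutions.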

  11. Intelligent Space Tube Optimization for speeding ground water remedial design.

    PubMed

    Kalwij, Ineke M; Peralta, Richard C

    2008-01-01

    An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces the computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with the NSS. Stage 1 speeds the evaluation of assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include the space tube radius and the number of strategies used to train the NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the stage 1 NSS-AGA optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable in many heuristic optimization settings in which the numerical simulator is computationally intensive and one would like to reduce that burden.

  12. Toward a global space exploration program: A stepping stone approach

    NASA Astrophysics Data System (ADS)

    Ehrenfreund, Pascale; McKay, Chris; Rummel, John D.; Foing, Bernard H.; Neal, Clive R.; Masson-Zwaan, Tanja; Ansdell, Megan; Peter, Nicolas; Zarnecki, John; Mackwell, Steve; Perino, Maria Antionetta; Billings, Linda; Mankins, John; Race, Margaret

    2012-01-01

    In response to the growing importance of space exploration in future planning, the Committee on Space Research (COSPAR) Panel on Exploration (PEX) was chartered to provide independent scientific advice to support the development of exploration programs and to safeguard the potential scientific assets of solar system objects. In this report, PEX elaborates a stepwise approach to achieve a new level of space cooperation that can help develop world-wide capabilities in space science and exploration and support a transition that will lead to a global space exploration program. The proposed stepping stones are intended to transcend cross-cultural barriers, leading to the development of technical interfaces and shared legal frameworks and fostering coordination and cooperation on a broad front. Input for this report was drawn from expertise provided by COSPAR Associates within the international community and via the contacts they maintain in various scientific entities. The report provides a summary and synthesis of science roadmaps and recommendations for planetary exploration produced by many national and international working groups, aiming to encourage and exploit synergies among similar programs. While science and technology represent the core and, often, the drivers for space exploration, several other disciplines and their stakeholders (Earth science, space law, and others) should be more robustly interlinked and involved than they have been to date. The report argues that a shared vision is crucial to this linkage, and to providing a direction that enables new countries and stakeholders to join and engage in the overall space exploration effort. Building a basic space technology capacity within a wider range of countries, ensuring new actors in space act responsibly, and increasing public awareness and engagement are concrete steps that can provide a broader interest in space exploration, worldwide, and build a solid basis for program sustainability. 
By engaging developing countries and emerging space nations in an international space exploration program, it will be possible to create a critical bottom-up support structure to support program continuity in the development and execution of future global space exploration frameworks. With a focus on stepping stones, COSPAR can support a global space exploration program that stimulates scientists in current and emerging spacefaring nations, and that will invite those in developing countries to participate—pursuing research aimed at answering outstanding questions about the origins and evolution of our solar system and life on Earth (and possibly elsewhere). COSPAR, in cooperation with national and international science foundations and space-related organizations, will advocate this stepping stone approach to enhance future cooperative space exploration efforts.

  13. Parameter estimation of a pulp digester model with derivative-free optimization strategies

    NASA Astrophysics Data System (ADS)

    Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.

    2017-07-01

    This work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and thus failed to find the global solution.
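    Simulated annealing's accept-or-reject loop is short enough to show in full. This is a generic box-bounded sketch on a standard multimodal test function, with an assumed Gaussian proposal and geometric cooling schedule; it is not the digester model or the exact solver configuration from the paper.

```python
import math
import random

def simulated_annealing(f, x0, bounds, t0=1.0, cooling=0.95, iters=2000):
    """Bare-bones simulated annealing over a box-bounded space: always accept
    improvements, accept uphill moves with probability exp(-delta / T)."""
    lo, hi = bounds
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        cand = [min(hi, max(lo, v + random.gauss(0, 0.1 * (hi - lo)))) for v in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling                     # geometric cooling schedule
    return best, fbest

random.seed(0)
# multimodal test function (Rastrigin) with global minimum 0 at the origin
f = lambda x: sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)
best, fbest = simulated_annealing(f, [3.0, -3.0], (-5.12, 5.12))
```

    The uphill-acceptance term is what lets the method leave local minima early on; if the schedule cools too fast, it degenerates into a greedy search and can stall at a higher objective value, the failure mode the abstract reports.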

  14. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-01-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
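    A penalty-function treatment of constraints inside a genetic algorithm, as investigated in the study, can be sketched generically. The toy objective, the quadratic penalty with a fixed scaling ratio, and the elitist averaging-crossover scheme below are all illustrative assumptions, not the NEP subsystem codes.

```python
import random

def ga_penalized(f, g, dim, bounds, penalty=100.0, pop=30, gens=60):
    """Genetic algorithm minimizing f subject to g(x) <= 0, with the
    constraint folded into fitness via a scaled quadratic penalty."""
    lo, hi = bounds

    def fitness(x):
        return f(x) + penalty * max(0.0, g(x)) ** 2

    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 2]                      # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)         # averaging crossover + mutation
            child = [(u + v) / 2 + random.gauss(0, 0.1) for u, v in zip(a, b)]
            children.append([min(hi, max(lo, v)) for v in child])
        P = elite + children
    return min(P, key=fitness)

random.seed(2)
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
g = lambda x: x[0] + x[1] - 3                      # feasible region: x0 + x1 <= 3
best = ga_penalized(f, g, 2, (0.0, 4.0))
```

    The penalty scaling ratio trades off constraint satisfaction against objective quality: too small and the GA converges to infeasible designs, too large and the fitness landscape becomes nearly flat in the feasible region, which is why the study investigates the ratio's effect on computational efficiency.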

  15. Multilevel decomposition approach to integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1994-01-01

    This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.

  16. What do we mean by sensitivity analysis? The need for comprehensive characterization of "global" sensitivity in Earth and Environmental systems models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2015-05-01

    Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based on partial derivatives, only when specified locally around a particular point (e.g., the optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
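    The local-versus-global distinction the abstract draws can be made concrete with a toy model on which the two notions disagree. The finite-difference local measure and the crude one-at-a-time variance measure below are illustrative choices only; the latter is not a full Sobol analysis.

```python
import random

def local_sensitivity(f, x, h=1e-6):
    """Partial-derivative sensitivities at a single point x (forward differences)."""
    base = f(x)
    out = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        out.append((f(xp) - base) / h)
    return out

def variance_sensitivity(f, bounds, n=4000):
    """Crude 'global' measure: variance of f when one factor varies over its
    range alone, others held at midpoints (one-at-a-time, not Sobol indices)."""
    mids = [(lo + hi) / 2 for lo, hi in bounds]
    out = []
    for i, (lo, hi) in enumerate(bounds):
        ys = []
        for _ in range(n):
            x = list(mids)
            x[i] = random.uniform(lo, hi)
            ys.append(f(x))
        m = sum(ys) / n
        out.append(sum((y - m) ** 2 for y in ys) / n)
    return out

random.seed(0)
f = lambda x: x[0] ** 2 + 0.1 * x[1]
# locally at the origin x0 has (near) zero derivative, but over the whole
# domain x0 drives far more of the response variance than x1 does
loc = local_sensitivity(f, [0.0, 0.0])
glo = variance_sensitivity(f, [(-1.0, 1.0), (-1.0, 1.0)])
```

    The two measures rank the factors in opposite orders, which is exactly the kind of conflicting conclusion the paper warns can arise when "sensitivity" is left undefined.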

  17. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of efficiently minimizing the infinite-time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments that are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first optimization algorithm designed for this setting. Our algorithm markedly reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, a rigorous proof of convergence to the global minimum of the problem considered is established.

  18. Prospective, randomized and controlled trial on magnesium sulfate administration during laparoscopic gastrectomy: effects on surgical space conditions and recovery profiles.

    PubMed

    Ryu, J H; Koo, B W; Kim, B G; Oh, A Y; Kim, H H; Park, D J; Lee, C M; Kim, S T; Do, S H

    2016-11-01

    The degree of neuromuscular blockade is one of the important factors that determine the surgical space condition during laparoscopic surgery. Magnesium sulfate potentiates the actions of neuromuscular blocking agents, and we hypothesized that intraoperative magnesium sulfate infusion may improve the surgical space condition during laparoscopic surgery. Eighty-four patients undergoing elective laparoscopic gastrectomy were randomized to receive isotonic saline (group C) or magnesium sulfate (group M; loading dose of 50 mg/kg over 10 min followed by continuous infusion at 15 mg/kg/h) to maintain moderate neuromuscular blockade using rocuronium. Two experienced surgeons scored the quality of the surgical space condition using a 5-point surgical rating scale (SRS). The secondary outcomes included recovery profiles, postoperative pain, and adverse events. The SRS in group M was higher than that in group C. The proportion of patients with an SRS of 5 (optimal) was 2.7% in group C and 40.5% in group M (P < 0.0001), although a lower amount of rocuronium was required in group M than in group C [24.2 (6.5) mg/h for group M vs. 27.5 (6) mg/h for group C; P = 0.017]. Pain at the operation site was less severe in group M than in group C at 24 h postoperatively (P = 0.009). Recovery profiles and adverse events were similar between the two groups. Intraoperative administration of magnesium sulfate improved the quality of surgical space conditions and decreased the neuromuscular blocking agent requirement and postoperative pain in patients undergoing laparoscopic gastrectomy.

  19. Multi-physics simulations of space weather

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Toth, Gabor; Sokolov, Igor; de Zeeuw, Darren; van der Holst, Bart; Cohen, Ofer; Glocer, Alex; Manchester, Ward, IV; Ridley, Aaron

    Presently magnetohydrodynamic (MHD) models represent the "workhorse" technology for simulating the space environment from the solar corona to the ionosphere. While these models are very successful in describing many important phenomena, they are based on a low-order moment approximation of the phase-space distribution function. In the last decade our group at the Center for Space Environment Modeling (CSEM) has developed the Space Weather Modeling Framework (SWMF) that efficiently couples together different models describing the interacting regions of the space environment. Many of these domain models (such as the global solar corona, the inner heliosphere or the global magnetosphere) are based on MHD and are represented by our multiphysics code, BATS-R-US. BATS-R-US can solve the equations of "standard" ideal MHD, but it can also go beyond this first approximation. It can solve resistive MHD, Hall MHD, semi-relativistic MHD (that keeps the displacement current), multispecies (different ion species have different continuity equations) and multifluid (all ion species have separate continuity, momentum and energy equations) MHD. Recently we added two-fluid Hall MHD (solving the electron and ion energy equations separately) and are working on extended magnetohydrodynamics with anisotropic pressures. This talk will show the effects of added physics and compare space weather simulation results to "standard" ideal MHD.

  20. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight saving potential, but complicates the design of the ITPS that now has both thermal and structural failure modes. The main objectives of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of 3D ITPS (solid) model. To reduce the computational expenses involved in the structural analysis, finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses which was replaced by efficient, low fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analyses (EGRA) facilitates sequentially sampling of design points around the region of interest in the design space. 
EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that with adaptive sampling, the number of designs required to find the optimum was reduced drastically while the accuracy improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. To estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research thus demonstrates optimization of the ITPS composite panel, with its multiple failure modes and large number of uncertainties, using adaptive sampling techniques.
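The separable sampling and bootstrap steps described above can be sketched as follows; the capacity and load distributions here are illustrative stand-ins, not the dissertation's ITPS limit states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical limit state: failure when the thermal-stress load exceeds
# the material strength. Distributions are illustrative only.
n_cap, n_load = 2000, 2000
capacity = rng.normal(loc=300.0, scale=30.0, size=n_cap)  # strength, MPa
load = rng.normal(loc=200.0, scale=25.0, size=n_load)     # stress, MPa

# Separable Monte Carlo: capacity and load are sampled independently, so
# every capacity sample can be compared with every load sample, giving
# n_cap * n_load limit-state evaluations from only n_cap + n_load samples.
fail_matrix = capacity[:, None] < load[None, :]
p_fail = fail_matrix.mean()

# Bootstrap (resample the capacity set with replacement) to estimate the
# sampling error of the probability-of-failure estimate.
n_boot = 200
boot = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n_cap, size=n_cap)
    boot[b] = fail_matrix[idx].mean()
std_err = boot.std(ddof=1)
print(f"P(failure) ~ {p_fail:.4f} +/- {std_err:.4f}")
```

Because the two variable sets are sampled independently, n + n analysis runs yield n x n limit-state comparisons, which is the efficiency gain the separable method trades on.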

  1. Workshop on Strategies for Calibration and Validation of Global Change Measurements

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce; Butler, James; Ardanuy, Philip

    1997-01-01

    The Committee on Environment and Natural Resources (CENR) Task Force on Observations and Data Management hosted a Global Change Calibration/Validation Workshop on May 10-12, 1995, in Arlington, Virginia. This Workshop was convened by Robert Schiffer of NASA Headquarters in Washington, D.C., for the CENR Secretariat with a view toward assessing and documenting lessons learned in the calibration and validation of large-scale, long-term data sets in land, ocean, and atmospheric research programs. The National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC) hosted the meeting on behalf of the Committee on Earth Observation Satellites (CEOS)/Working Group on Calibration/Validation, the Global Change Observing System (GCOS), and the U.S. CENR. Experts from the international scientific community were brought together to develop recommendations for calibration and validation of global change data sets taken from instrument series and across generations of instruments and technologies. Forty-nine scientists from nine countries participated: the U.S., Canada, the United Kingdom, France, Germany, Japan, Switzerland, Russia, and Kenya.

  2. Genetic characterization of stem rust resistance in a global spring wheat germplasm collection

    USDA-ARS?s Scientific Manuscript database

    Stem rust is considered one of the most damaging diseases of wheat. The recent emergence of the stem rust Ug99 race group poses a serious threat to world wheat production. Utilization of genetic resistance in cultivar development is the optimal way to control stem rust. Here we report association ma...

  3. Modeling Common Cause Failures of Thrusters on ISS Visiting Vehicles

    NASA Technical Reports Server (NTRS)

    Haught, Megan

    2014-01-01

    This paper discusses the methodology used to model common cause failures of thrusters on the International Space Station (ISS) Visiting Vehicles. The ISS Visiting Vehicles each have as many as 32 thrusters, whose redundancy makes them susceptible to common cause failures. The Global Alpha Model (as described in NUREG/CR-5485) can be used to represent the system common cause contribution, but NUREG/CR-5496 supplies global alpha parameters for groups only up to size six. Because of the large number of redundant thrusters on each vehicle, regression is used to determine parameter values for groups of size larger than six. An additional challenge is that Visiting Vehicle thruster failures must occur in specific combinations in order to fail the propulsion system; not all failure groups of a certain size are critical.
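    The extrapolation step can be sketched as follows; the alpha values below are hypothetical placeholders, not the NUREG/CR-5496 tabulated parameters, and a simple log-log fit stands in for whatever regression form the paper actually uses:

```python
import numpy as np

# Hypothetical global-alpha values for common cause groups of size 2..6;
# the real values come from NUREG/CR-5496 and are not reproduced here.
group_size = np.array([2, 3, 4, 5, 6])
global_alpha = np.array([0.047, 0.031, 0.023, 0.018, 0.015])

# The assumed trend is roughly log-linear in group size, so fit
# log(alpha) = a * log(m) + b and extrapolate to larger groups.
a, b = np.polyfit(np.log(group_size), np.log(global_alpha), 1)

def alpha_for(m):
    """Extrapolated global alpha for a common cause group of size m."""
    return np.exp(a * np.log(m) + b)

for m in (8, 16, 32):  # ISS visiting vehicles carry up to 32 thrusters
    print(m, alpha_for(m))
```

The fitted curve reproduces the tabulated range and decays monotonically for the larger thruster groups, which is the qualitative behavior the regression needs to capture.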

  4. Multidisciplinary Optimization and Damage Tolerance of Stiffened Structures

    NASA Astrophysics Data System (ADS)

    Jrad, Mohamed

    The structural optimization of a cantilever aircraft wing with curvilinear spars, ribs, and stiffeners is described. For the optimization of a complex wing, a common strategy is to divide the optimization procedure into two subsystems: the global wing optimization, which optimizes the geometry of spars, ribs, and wing skins; and the local panel optimization, which optimizes the design variables of local panels bordered by spars and ribs. The stiffeners are placed on the local panels to increase the stiffness and buckling resistance. During the local panel optimization, the stress information is taken from the global model as a displacement boundary condition on the panel edges using the so-called "Global-Local Approach". Particle swarm optimization is used in the integrated global/local optimization to optimize the SpaRibs. A parallel computing approach was developed in the Python programming language to reduce the CPU time. The license cycle-check method and the memory self-adjustment method are two approaches applied within the parallel framework to make the best use of resources by easing license and memory limitations and making the code robust. The integrated global-local optimization approach was applied to the subsonic NASA Common Research Model (CRM) wing, demonstrating that the methodology scales to medium-fidelity FEM analysis. The structural weight of the wing was reduced by 42%, and the parallel implementation reduced the CPU time by 89%. The aforementioned Global-Local Approach is then investigated and applied to a composite panel with a crack at its center. Because of composite laminates' heterogeneity, an accurate analysis of such panels requires considerable computation time and storage space. 
A possible alternative that reduces the computational complexity is global-local analysis, which involves an approximate analysis of the whole structure followed by a detailed analysis of a significantly smaller region of interest. Buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions. Results are then compared to those from Abaqus FEA for different shell elements. The cases of a composite panel with one, two, and three stiffeners are investigated. The effect of the distance between the stiffeners on the buckling load is also studied, as is the variation of the buckling load and buckling modes with the stiffeners' height. It is shown that there is an optimum value of stiffener height beyond which the structural response of the stiffened panel is not improved and the buckling load does not increase. Furthermore, there exist critical values of stiffener height at which the buckling mode of the structure changes. Next, buckling analysis of a composite panel with two straight stiffeners and a crack at the center is performed. Finally, buckling analysis of a composite panel with curvilinear stiffeners and a crack at the center is also conducted. Results show that panels with a larger crack have a reduced buckling load and that the buckling load decreases slightly when higher-order 2D shell FEM elements are used. A damage tolerance framework, EBF3PanelOpt, has been developed to design and analyze curvilinearly stiffened panels. The framework is written in the scripting language Python, and it interacts with the commercial software MSC.Patran (for geometry and mesh creation), MSC.Nastran (for finite element analysis), and MSC.Marc (for damage tolerance analysis). The crack location is set to the location of the maximum value of the major principal stress, while its orientation is set normal to the major principal axis direction. 
The effective stress intensity factor is calculated using the Virtual Crack Closure Technique and compared to the fracture toughness of the material in order to decide whether the crack will grow. The ratio of these two quantities is used as a constraint, along with the buckling factor, the Kreisselmeier-Steinhauser criterion, and the crippling factor. The EBF3PanelOpt framework is integrated within a two-step particle swarm optimization (PSO) in order to minimize the weight of the panel while satisfying the aforementioned constraints, using all the shape and thickness parameters as design variables. The result of the PSO is then used as an initial guess for a gradient-based optimization, employing VisualDOC with only the thickness parameters as design variables. A stiffened panel with two curvilinear stiffeners is optimized for two load cases; in both cases, the panel's weight is reduced significantly.
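A minimal particle swarm optimizer of the kind used in the first optimization step might look like this; the two-variable quadratic objective is a toy stand-in for the EBF3PanelOpt weight-plus-constraints objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best while the swarm shares a global best that all particles are drawn to."""
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in for panel weight vs. two design variables, minimum at (1, 2).
best_x, best_f = pso(lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2,
                     (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best_x, best_f)
```

Because PSO needs no gradients, it suits the multimodal, simulation-driven design space here; its result then seeds the gradient-based refinement, as the record describes.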

  5. A homotopy algorithm for digital optimal projection control GASD-HADOC

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-Gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially pronounced in control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
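The continuation idea behind such homotopy algorithms can be illustrated on a scalar root-finding problem, a deliberately simplified stand-in for the optimal projection equations:

```python
import math

def homotopy_root(f, df, x0, steps=50):
    """Track the root of H(x, t) = (1 - t) g(x) + t f(x) from t = 0, where
    the easy problem g(x) = 1 - x has the known root x0 = 1, to t = 1 (the
    target problem f), applying a few Newton corrections after each small
    step in t so the iterate never strays far from the solution branch."""
    g, dg = (lambda x: 1.0 - x), (lambda x: -1.0)
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(5):  # Newton corrector at fixed t
            h = (1 - t) * g(x) + t * f(x)
            dh = (1 - t) * dg(x) + t * df(x)
            x -= h / dh
    return x

# Target problem: the fixed point of cos(x), i.e. the root of cos(x) - x.
root = homotopy_root(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, x0=1.0)
print(root)
```

Because the deformation parameter t moves in small steps, the Newton corrector always starts near the current solution branch, which is the source of the global convergence property mentioned above.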

  6. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods; it shares the same a posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without positivity constraints and then damped to a physically reasonable range. This first-step MAP inversion brings the solution close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters obtained from step one as the initial solution. Then slip artifacts are eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a synthetic model with a 45-degree dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. 
Our results show the effectiveness of the method in earthquake studies and several advantages of it over other methods. Details will be reported at the meeting.
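The first-step search can be illustrated with plain simulated annealing (the record uses the more sophisticated ASA variant); the one-dimensional objective below is a toy stand-in for the negative log-posterior:

```python
import math
import random

random.seed(42)

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=4000):
    """Plain simulated annealing: accept uphill moves with probability
    exp(-delta/T), so the search can jump over local minima while the
    temperature T is still high, then settle as T cools."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Multimodal 1-D toy standing in for a negative log-posterior:
# global minimum 0 at x = 2, with ripples creating local minima.
def neg_log_post(x):
    return 0.1 * (x - 2.0) ** 2 - math.cos(3.0 * (x - 2.0)) + 1.0

best, fbest = simulated_annealing(neg_log_post, x0=0.0)
print(best, fbest)
```

The uphill-acceptance rule is what lets the first step "jump over local maximum regions" of the posterior, in contrast to a purely greedy search.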

  7. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction

    NASA Astrophysics Data System (ADS)

    Leverrier, Anthony

    2017-05-01

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group Sn as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  8. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction.

    PubMed

    Leverrier, Anthony

    2017-05-19

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_{n} as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  9. Community Coordinated Modeling Center: A Powerful Resource in Space Science and Space Weather Education

    NASA Astrophysics Data System (ADS)

    Chulaki, A.; Kuznetsova, M. M.; Rastaetter, L.; MacNeice, P. J.; Shim, J. S.; Pulkkinen, A. A.; Taktakishvili, A.; Mays, M. L.; Mendoza, A. M. M.; Zheng, Y.; Mullinix, R.; Collado-Vega, Y. M.; Maddox, M. M.; Pembroke, A. D.; Wiegand, C.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) is a NASA-affiliated interagency partnership with the primary goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource, supporting undergraduate and graduate education and research and spreading space weather awareness worldwide. A unique combination of assets, capabilities, and close ties to the scientific and educational communities enables this small group to serve as a hub for raising generations of young space scientists and engineers. CCMC resources are publicly available online, providing unprecedented global access to the largest collection of modern space science models (developed by the international research community). CCMC has revolutionized the way simulations are utilized in classroom settings, student projects, and scientific labs, and it serves hundreds of educators, students, and researchers every year. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unrivaled capabilities and experience, the team provides in-depth space weather training to students and professionals worldwide and offers undergraduates an opportunity to engage in real-time space weather monitoring, analysis, forecasting, and research. In-house development of state-of-the-art space weather tools and applications provides exciting opportunities for students majoring in computer science and computer engineering to intern with the software engineers at the CCMC while also learning about space weather from NASA scientists.

  10. The Caspian Sea regionalism in a globalized world: Energy security and regional trajectories of Azerbaijan and Iran

    NASA Astrophysics Data System (ADS)

    Hedjazi, Babak

    2007-12-01

    This dissertation is fundamentally about the formation of new regional spaces in Central Eurasia, viewed from a dynamic, comparative, and historical approach. Analyzing global-local economic and political interactions and their consequences for the resource-rich countries of the Caspian Sea enables us to reframe security as a central element of the new global order. In this respect, the dissertation examines how two particular states, Azerbaijan and Iran, respond to the changing global security environment and optimize their capacity to absorb or control change. Security as I conceive it is multidimensional and engages various social, political, and economic domains. My research is articulated along three hypotheses regarding the formation of a new regional space and its consequences for territorial polarization and interstate rivalry. These hypotheses, respectively and cumulatively, elucidate the global and domestic contexts of regional space formation, regional strategic and discursive trajectories, and regional tensions of global/local interactions. In order to empirically test these hypotheses, a series of thirty interviews was conducted by the author with local and foreign business representatives and civilian and government representatives, corroborated by economic data collected from the International Energy Agency. The findings of the research validate the primary assumption of the dissertation: that Azerbaijan and Iran have chosen the regional scale to address discrepancies between their aspired place in the new world order and the reality of their power and international status. Extending the argument for structural scarcity of oil toward contenders, this dissertation concludes that Caspian oil has become a fundamental element of the regional discourse. 
The mismatch between the rhetoric of sovereign rights and energy security on one side and the reality of the regional countries' powerlessness and their need to reach international markets on the other is a fundamental focal point of the divergent regional trajectories of Azerbaijan and Iran. Divergent readings of energy security and its provision by Azerbaijan and Iran on the one hand, and how energy security is interpreted and incorporated into institutionalized regulation and new regimes of governance by consumer countries on the other, shape the new configuration of Caspian Sea regionalism.

  11. Open space preservation, property value, and optimal spatial configuration

    Treesearch

    Yong Jiang; Stephen K. Swallow

    2007-01-01

    The public has increasingly demonstrated a strong support for open space preservation. How to finance the socially efficient level of open space with the optimal spatial structure is of high policy relevance to local governments. In this study, we developed a spatially explicit open space model to help identify the socially optimal amount and optimal spatial...

  12. A quasi-global precipitation time series for drought monitoring

    USGS Publications Warehouse

    Funk, Chris C.; Peterson, Pete J.; Landsfeld, Martin F.; Pedreros, Diego H.; Verdin, James P.; Rowland, James D.; Romero, Bo E.; Husak, Gregory J.; Michaelsen, Joel C.; Verdin, Andrew P.

    2014-01-01

    Estimating precipitation variations in space and time is an important aspect of drought early warning and environmental monitoring. An evolving drier-than-normal season must be placed in historical context so that the severity of rainfall deficits may quickly be evaluated. To this end, scientists at the U.S. Geological Survey Earth Resources Observation and Science Center, working closely with collaborators at the University of California, Santa Barbara Climate Hazards Group, have developed a quasi-global (50°S–50°N, 180°E–180°W), 0.05° resolution, 1981 to near-present gridded precipitation time series: the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) data archive.

  13. Adaptive Correlation Space Adjusted Open-Loop Tracking Approach for Vehicle Positioning with Global Navigation Satellite System in Urban Areas

    PubMed Central

    Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng

    2015-01-01

    For vehicle positioning with the Global Navigation Satellite System (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with a fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and the code phase measuring accuracy is largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the pseudorange accuracy, which depends on the code phase measurement. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measuring errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulations and real tests confirm that the pseudorange and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683

  14. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.

    2017-01-01

    During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design fluctuate rapidly during this phase, there is nonetheless a need to acquire accurate data from which to down-select designs, as these decisions will have a large impact upon program life-cycle cost. Enabling the conceptual designer to produce accurate data in a timely manner is therefore essential to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry-standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal, requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like, which can result in sub-optimal performance being recorded. Thus, in any case but the ideal, either time or accuracy is sacrificed. In the authors' previous work, a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. Without the instincts of a human in the loop, however, this method relied upon Monte Carlo simulation to find successful trajectories. 
Overall the method has had mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize, whose value is represented as optval; a set of dependent constraints to meet, with associated forms and tolerances, whose value is represented as p2; and a set of independent variables known as the u-vector to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region, which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within it, has traditionally required an expert analyst.
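The "artificial p2" idea can be sketched as follows: when a run dies before the optimizer engages, substitute a penalty graded by how far the trajectory got, so the external optimizer sees a slope leading out of the non-connecting region. All names and the toy trajectory model below are illustrative, not POST's actual interface:

```python
EVENTS = 10  # number of prescribed trajectory events in the input deck

def run_trajectory(u):
    """Toy stand-in for a POST run (not POST's API): returns the number of
    events completed and the real constraint error p2, which is only
    available when all events are reached (the trajectory 'connects')."""
    completed = min(EVENTS, int(EVENTS + 1 - 2.0 * abs(u - 3.0)))
    if completed < EVENTS:
        return completed, None
    return EVENTS, (u - 3.0) ** 2

def artificial_p2(u, big=1e3):
    """Objective for the external optimizer: the real p2 when the run
    connects, otherwise a penalty that shrinks as more events survive,
    giving the optimizer a gradient out of the non-connecting region."""
    completed, p2 = run_trajectory(u)
    if p2 is None:
        return big * (EVENTS - completed)
    return p2

# A coarse sweep shows the graded penalty funneling toward the connecting
# region around u = 3.0, where the real p2 takes over.
best_u = min((0.1 * k for k in range(61)), key=artificial_p2)
print(best_u)
```

Replacing the flat p2 = 0 plateau with a progress-graded penalty is what lets a generic optimizer locate the connecting region without the Monte Carlo shotgun approach.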

  15. Fluorescence-based classification of Caribbean coral reef organisms and substrates

    USGS Publications Warehouse

    Zawada, David G.; Mazel, Charles H.

    2014-01-01

    A diverse group of coral reef organisms, representing several phyla, possess fluorescent pigments. We investigated the potential of using the characteristic fluorescence emission spectra of these pigments to enable unsupervised, optical classification of coral reef habitats. We compiled a library of characteristic fluorescence spectra through in situ and laboratory measurements from a variety of specimens throughout the Caribbean. Because fluorescent pigments are not species-specific, the spectral library is organized in terms of 15 functional groups. We investigated the spectral separability of the functional groups in terms of the number of wavebands required to distinguish between them, using the similarity measures Spectral Angle Mapper (SAM), Spectral Information Divergence (SID), SID-SAM mixed measure, and Mahalanobis distance. This set of measures represents geometric, stochastic, joint geometric-stochastic, and statistical approaches to classifying spectra. Our hyperspectral fluorescence data were used to generate sets of 4-, 6-, and 8-waveband spectra, including random variations in relative signal amplitude, spectral peak shifts, and water-column attenuation. Each set consisted of 2 different band definitions: ‘optimally-picked’ and ‘evenly-spaced.’ The optimally-picked wavebands were chosen to coincide with as many peaks as possible in the functional group spectra. Reference libraries were formed from half of the spectra in each set and used for training purposes. Average classification accuracies ranged from 76.3% for SAM with 4 evenly-spaced wavebands to 93.8% for Mahalanobis distance with 8 evenly-spaced wavebands. The Mahalanobis distance consistently outperformed the other measures. In a second test, empirically-measured spectra were classified using the same reference libraries and the Mahalanobis distance for just the 8 evenly-spaced waveband case. 
Average classification accuracies were 84% and 87%, corresponding to the extremes in modeled water-column attenuation. The classification results from both tests indicate that a high degree of separability among the 15 fluorescent-spectra functional groups is possible using only a modest number of spectral bands.
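As an illustration of the geometric approach, the SAM measure classifies a spectrum by the angle it makes with each library entry; the 4-waveband spectra below are made-up values, not the paper's 15-group library:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM): the angle between two spectra viewed
    as vectors, insensitive to overall amplitude scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_sam(spectrum, library):
    """Assign the spectrum to the library entry with the smallest angle."""
    return min(library, key=lambda name: spectral_angle(spectrum, library[name]))

# Illustrative 4-waveband fluorescence emission spectra (hypothetical
# functional-group names and values).
library = {
    "green_fluorescing_coral": np.array([0.1, 0.9, 0.4, 0.1]),
    "orange_fluorescing_coral": np.array([0.05, 0.2, 0.8, 0.6]),
    "chlorophyll_substrate": np.array([0.0, 0.1, 0.2, 0.9]),
}

# A measured spectrum that is a scaled, slightly noisy green signature:
measured = 2.5 * np.array([0.12, 0.88, 0.38, 0.12])
print(classify_sam(measured, library))
```

Because the angle ignores vector length, SAM tolerates the amplitude variations the study modeled; the Mahalanobis distance, which additionally accounts for within-group covariance, consistently outperformed it in their tests.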

  16. Optimizing molecular properties using a relative index of thermodynamic stability and global optimization techniques

    NASA Astrophysics Data System (ADS)

    Fournier, René; Mohareb, Amir

    2016-01-01

    We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation functional. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn+ (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn+ species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6, A,B = Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and a large highest occupied molecular orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV.
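The structure of such an objective can be sketched in one dimension; rits() and gap() below are made-up stand-ins for the paper's KS-DFT quantities, and a crude random-restart search stands in for the actual GO strategy:

```python
import random

random.seed(7)

def rits(x):
    """Stand-in relative index of thermodynamic stability: lower = more stable."""
    return (x - 1.0) ** 2

def gap(x):
    """Stand-in HOMO-LUMO gap: largest near x = 2."""
    return 2.0 / (1.0 + (x - 2.0) ** 2)

def objective(x, w=0.5):
    # Penalize instability, reward a large gap: including the stability
    # term keeps solutions from being large-gap but unmakeable.
    return rits(x) - w * gap(x)

def random_restart_go(n_starts=20, n_steps=200, step=0.2):
    """Crude global search: many random starts, each refined greedily."""
    best_x, best_f = 0.0, objective(0.0)
    for _ in range(n_starts):
        x = random.uniform(-5.0, 5.0)
        f = objective(x)
        for _ in range(n_steps):
            cand = x + random.uniform(-step, step)
            fc = objective(cand)
            if fc < f:
                x, f = cand, fc
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

x_opt, f_opt = random_restart_go()
print(x_opt, f_opt)
```

The optimum lands between the most stable point (x = 1) and the largest-gap point (x = 2), showing how the weight w trades stability against the target property.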

  17. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and extend network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm was proposed. Building on the LEACH protocol, cluster formation and cluster-head selection were improved using the chicken swarm optimization algorithm; chickens trapped in local optima are relocated by Levy flight, which enhances population diversity and preserves the global search capability of the algorithm. By balancing the use of network nodes, the new protocol avoids the premature death of intensively used nodes and extends the lifetime of the wireless sensor network. Simulation experiments showed that the protocol outperforms both the LEACH protocol and a clustering routing protocol based on particle swarm optimization in terms of energy consumption.
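
    The Levy-flight relocation step described above can be sketched as follows. This is a minimal illustration using Mantegna's algorithm for Levy-stable step lengths; the step scale, exponent beta, and the `relocate` update rule are plausible assumptions for illustration, not the paper's exact formulation:

```python
import math
import random

def levy_step(beta: float = 1.5) -> float:
    """One Levy-stable step length via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def relocate(position, best, scale=0.01, beta=1.5):
    """Move a stagnant individual with a Levy flight: heavy-tailed steps
    proportional to its offset from the swarm's global best position."""
    return [p + scale * levy_step(beta) * (p - b) for p, b in zip(position, best)]
```

    The heavy-tailed step distribution mixes many small refinements with occasional long jumps, which is what lets a trapped individual escape a local optimum.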

  18. The Large Space Structures Technology Program

    DTIC Science & Technology

    1992-04-01

    Organization and Plan--The LSSTP was initiated in July 1985. It was conceived by Jerome Pearson, who, as leader of the Vibration Group, was responsible... Jerome Pearson was named project manager and Terry Hertz from the Analysis and Optimization Branch was his deputy. The technical disciplines and the...continued until the end of 1990. The LSSTP was originally managed by Jerome Pearson, in addition to his responsibilities as Vibration Group leader. Terry

  19. First-Principles Prediction of Thermodynamically Stable Two-Dimensional Electrides

    DOE PAGES

    Ming, Wenmei; Yoon, Mina; Univ. of Tennessee, Knoxville, TN; ...

    2016-10-21

    Two-dimensional (2D) electrides, emerging as a new type of layered material whose electrons are confined in interlayer spaces instead of at atomic proximities, are receiving interest for their high performance in various (opto)electronics and catalytic applications. Experimentally, however, 2D electrides have been found in only a couple of layered nitrides and carbides. We report new thermodynamically stable alkaline-earth-based 2D electrides identified by using a first-principles global structure optimization method, phonon spectrum analysis, and molecular dynamics simulation. The method was applied to binary compounds consisting of alkaline-earth elements as cations and group VA, VIA, or VIIA nonmetal elements as anions. We also revealed that the stability of a layered 2D electride structure is closely related to the cation/anion size ratio; stable 2D electrides possess a sufficiently large cation/anion size ratio to minimize electrostatic energy among cations, anions, and anionic electrons. This work demonstrates a new avenue to the discovery of thermodynamically stable 2D electrides beyond experimental material databases and provides new insight into the principles of electride design.

  20. Space Reclamation for Uncoordinated Checkpointing in Message-Passing Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min

    1993-01-01

    Checkpointing and rollback recovery are techniques that can provide efficient recovery from transient process failures. In a message-passing system, the rollback of a message sender may cause the rollback of the corresponding receiver, and the system needs to roll back to a consistent set of checkpoints called a recovery line. If processes are allowed to take uncoordinated checkpoints, this rollback propagation may result in the domino effect, which prevents recovery line progression. Traditionally, only obsolete checkpoints before the global recovery line can be discarded, and the necessary and sufficient condition for identifying all garbage checkpoints has remained an open problem. A necessary and sufficient condition for achieving optimal garbage collection is derived, and it is proved that the number of useful checkpoints is bounded by N(N+1)/2, where N is the number of processes. The approach is based on the maximum-sized antichain model of consistent global checkpoints and the technique of recovery line transformation and decomposition. It is also shown that, for systems requiring message logging to record in-transit messages, the same approach can be used to achieve optimal message log reclamation. As a final topic, a unifying framework is described by considering checkpoint coordination and exploiting piecewise determinism as mechanisms for bounding rollback propagation, and the applicability of the optimal garbage collection algorithm to domino-free recovery protocols is demonstrated.
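
    The N(N+1)/2 bound on useful checkpoints translates directly into a storage-sizing helper; a minimal sketch (the function name is illustrative):

```python
def max_useful_checkpoints(n_processes: int) -> int:
    """Upper bound on non-garbage checkpoints in an N-process system: N(N+1)/2."""
    return n_processes * (n_processes + 1) // 2
```

    For example, a 4-process system never needs to retain more than 10 checkpoints once optimal garbage collection is applied.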

  1. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
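
    The global-then-local strategy described above can be illustrated in miniature. The sketch below is neither the full DIRECT algorithm (which subdivides hyperrectangles under the complete potentially-optimal selection rule) nor Powell's method proper; it applies a crude Lipschitz-lower-bound selection with interval trisection in one dimension, followed by a greedy shrinking-step local polish, and all names and parameters are assumptions for illustration:

```python
def direct_1d(f, a, b, iters=30, k_lipschitz=1.0):
    """Toy DIRECT-style global search: keep a pool of intervals, score each
    by a Lipschitz lower bound on f, and trisect the most promising one."""
    intervals = [(a, b, f((a + b) / 2))]
    for _ in range(iters):
        # lower bound f(center) - K * halfwidth; smaller means more promising
        i = min(range(len(intervals)),
                key=lambda j: intervals[j][2]
                - k_lipschitz * (intervals[j][1] - intervals[j][0]) / 2)
        lo, hi, _ = intervals.pop(i)
        w = (hi - lo) / 3
        for s in range(3):  # trisect and evaluate the new centers
            l, h = lo + s * w, lo + (s + 1) * w
            intervals.append((l, h, f((l + h) / 2)))
    lo, hi, fc = min(intervals, key=lambda t: t[2])
    return (lo + hi) / 2, fc

def refine(f, x, step=1e-2, iters=200):
    """Greedy shrinking-step local polish (a crude stand-in for Powell)."""
    fx = f(x)
    for _ in range(iters):
        for d in (step, -step):
            if f(x + d) < fx:
                x += d
                fx = f(x)
                break
        else:
            step /= 2  # no improving direction: tighten the step
    return x, fx
```

    The design mirrors the paper's finding: the global stage supplies a starting point inside the right basin, so the local stage no longer depends on a good initial guess.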

  2. Identical phase oscillators with global sinusoidal coupling evolve by Möbius group action.

    PubMed

    Marvel, Seth A; Mirollo, Renato E; Strogatz, Steven H

    2009-12-01

    Systems of N identical phase oscillators with global sinusoidal coupling are known to display low-dimensional dynamics. Although this phenomenon was first observed about 20 years ago, its underlying cause has remained a puzzle. Here we expose the structure working behind the scenes of these systems by proving that the governing equations are generated by the action of the Möbius group, a three-parameter subgroup of fractional linear transformations that map the unit disk to itself. When there are no auxiliary state variables, the group action partitions the N-dimensional state space into three-dimensional invariant manifolds (the group orbits). The N-3 constants of motion associated with this foliation are the N-3 functionally independent cross ratios of the oscillator phases. No further reduction is possible, in general; numerical experiments on models of Josephson junction arrays suggest that the invariant manifolds often contain three-dimensional regions of neutrally stable chaos.
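
    The claim that cross ratios of the oscillator phases are constants of motion under the Möbius group action can be checked numerically; a minimal sketch (the particular map parameters are arbitrary):

```python
import cmath

def mobius(z, a, theta):
    """Disk-preserving Mobius map: a rotation composed with the disk
    automorphism z -> (z - a) / (1 - conj(a) z), valid for |a| < 1."""
    return cmath.exp(1j * theta) * (z - a) / (1 - a.conjugate() * z)

def cross_ratio(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

# four oscillator phases, represented as points on the unit circle
phases = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.0, 4.2)]
before = cross_ratio(*phases)
after = cross_ratio(*(mobius(z, 0.4 + 0.2j, 0.7) for z in phases))
```

    Because every fractional linear transformation preserves cross ratios, `before` and `after` agree to machine precision, which is exactly why N phases carry N-3 independent conserved quantities under this flow.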

  3. Satellite-based PM concentrations and their application to COPD in Cleveland, OH

    PubMed Central

    Kumar, Naresh; Liang, Dong; Comellas, Alejandro; Chu, Allen D.; Abrams, Thad

    2014-01-01

    A hybrid approach is proposed to estimate exposure to fine particulate matter (PM2.5) at a given location and time. This approach builds on satellite-based aerosol optical depth (AOD), air pollution data from sparsely distributed Environmental Protection Agency (EPA) sites, and local time–space Kriging, an optimal interpolation technique. Given the daily global coverage of AOD data, we can develop daily estimates of air quality at any given location and time. This can assure unprecedented spatial coverage, needed for air quality surveillance and management and for epidemiological studies. In this paper, we developed an empirical relationship between the 2 km AOD and PM2.5 data from EPA sites. Extrapolating this relationship to the study domain resulted in 2.3 million predictions of PM2.5 between 2000 and 2009 in the Cleveland Metropolitan Statistical Area (MSA). We have developed local time–space Kriging to compute exposure at a given location and time using the predicted PM2.5. Daily estimates of PM2.5 were developed for the Cleveland MSA between 2000 and 2009 at 2.5 km spatial resolution; 1.7 million (~79.8%) of the 2.13 million predictions required for the multiyear geographic domain were robust. In the epidemiological application of the hybrid approach, admission for an acute exacerbation of chronic obstructive pulmonary disease (AECOPD) was examined with respect to time–space lagged PM2.5 exposure. Our analysis suggests that the risk of AECOPD increases 2.3% with a unit increase in PM2.5 exposure within 9-day and 0.05° (~5 km) distance lags. In the aggregated analysis, the exposed groups (who experienced exposure to PM2.5 > 15.4 μg/m3) were 54% more likely to be admitted for AECOPD than the reference group. The hybrid approach offers greater spatiotemporal coverage and more reliable characterization of ambient concentration than conventional in situ monitoring-based approaches. Thus, this approach can potentially reduce exposure misclassification errors in conventional air pollution epidemiology studies. PMID:24045428
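
    The Kriging interpolation at the core of the hybrid approach can be sketched in one dimension. This minimal ordinary-Kriging example assumes an exponential covariance model with illustrative sill and range parameters, not the paper's fitted time–space variogram:

```python
import math

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting (for the kriging system)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def krige(xs, zs, x0, sill=1.0, rng=5.0):
    """Ordinary kriging prediction at x0 from samples (xs, zs),
    using an exponential covariance C(h) = sill * exp(-|h| / rng)."""
    cov = lambda h: sill * math.exp(-abs(h) / rng)
    n = len(xs)
    # kriging system: sample covariances plus a Lagrange row enforcing sum(w) = 1
    A = [[cov(xs[i] - xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(x0 - xi) for xi in xs] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, zs))
```

    Two properties worth noting: kriging is an exact interpolator (predicting at a sample location returns the sample value), and because the weights sum to one, a constant field is reproduced everywhere.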

  4. Reasoning from non-stationarity

    NASA Astrophysics Data System (ADS)

    Struzik, Zbigniew R.; van Wijngaarden, Willem J.; Castelo, Robert

    2002-11-01

    Complex real-world (biological) systems often exhibit intrinsically non-stationary behaviour of their temporal characteristics. We discuss local measures of scaling which can capture and reveal changes in a system's behaviour. Such measures offer increased insight into a system's behaviour and are superior to global, spectral characteristics like the multifractal spectrum. They are, however, often inadequate for fully understanding and modelling the phenomenon. We illustrate an attempt to capture complex model characteristics by analysing (multiple order) correlations in a high dimensional space of parameters of the (biological) system being studied. Both temporal information, among others local scaling information, and external descriptors/parameters, possibly influencing the system's state, are used to span the search space investigated for the presence of a (sub-)optimal model. As an example, we use fetal heartbeat monitored during labour.

  5. LP instrument for "Obstanovka" experiment: use of wireless communication in complex space-borne experiments

    NASA Astrophysics Data System (ADS)

    Kirov, Boian; Batchvarov, Ditchko; Krasteva, Rumiana; Boneva, Ani; Nedkov, Rumen; Klimov, Stanislav; Stainov, Gencho

    The advance of new wireless communications provides additional opportunities for space-borne experiments. It is now possible to have one basic instrument collecting information from several sensors without burdensome harnessing among them. Moreover, wireless connections among the various elements inside an instrument allow the hardware to be upgraded without redesigning the whole instrument. In complex experiments consisting of several instruments, continuous communication among the instruments becomes possible, as does optimal selection of the appropriate mode of operation by the central processor. In the present paper, the LP instrument (electrostatic Langmuir probe) is described - an element of the "Obstanovka" experiment designed to operate aboard the International Space Station - with emphasis on the use of wireless communication between the sensors and the main instrument.

  6. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
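
    The self-optimization scheme described above (relax to an attractor, reinforce it with Hebbian learning, and occasionally interrupt normal activity by resetting the state) can be sketched for a conventional Hopfield network. Network size, learning rate, and reset schedule below are illustrative assumptions; a full demonstration would also compare attractor quality under the original weights before and after learning:

```python
import random

N = 20
random.seed(1)
# random symmetric constraint network with zero diagonal
W = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        W[i][j] = W[j][i] = random.choice([-1.0, 1.0])

def energy(s):
    """Hopfield energy: lower values satisfy more pairwise constraints."""
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

def relax(s, steps=400):
    """Asynchronous threshold updates; each flip never increases the energy."""
    for _ in range(steps):
        i = random.randrange(N)
        h = sum(W[i][j] * s[j] for j in range(N))
        s[i] = 1 if h >= 0 else -1
    return s

def self_optimize(resets=60, eta=0.002):
    """Repeatedly reset to a random state (the 'interruption'), relax to an
    attractor, and reinforce that attractor with a small Hebbian update."""
    for _ in range(resets):
        s = relax([random.choice([-1, 1]) for _ in range(N)])
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += eta * s[i] * s[j]
    return s
```

    The Hebbian step deepens and widens the basins of the attractors actually visited, so over many interruptions the network increasingly converges on states that satisfy the constraints well.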

  7. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  8. Volumetrically-Derived Global Navigation Satellite System Performance Assessment from the Earth's Surface through the Terrestrial Service Volume and the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user from the Earth's surface through the Terrestrial Service Volume (TSV) to the edge of the Space Service Volume (SSV) when a multi-GNSS solution-space approach is utilized. The ICG Working Group on Enhancement of GNSS Performance, New Services and Capabilities has started a three-phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that increasing-complexity-and-fidelity analysis initiative was recently expanded to compare nadir-facing and zenith-facing user hemispherical antenna coverage with omnidirectional antenna coverage at altitudes of 8,000 km and 36,000 km. This report summarizes the performance of these antenna coverage techniques at altitudes ranging from 100 km to 36,000 km, to be all-encompassing, as well as the volumetrically-derived system availability metrics.

  9. Constraining the physical state by symmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fatibene, L., E-mail: lorenzo.fatibene@unito.it; INFN - Sezione Torino - IS QGSKY; Ferraris, M.

    After reviewing the hole argument and its relations with the initial value problem and general covariance, we shall discuss how much freedom one has to define the physical state in a generally covariant field theory (with or without internal gauge symmetries). Our analysis relies on Cauchy problems, thus it is restricted to globally hyperbolic spacetimes. We shall show that in generally covariant theories on a compact space (as well as for internal gauge symmetries on any spacetime) one has no freedom and one is forced to declare as physically equivalent two configurations which differ by a global spacetime diffeomorphism (or by an internal gauge transformation), as is usually prescribed. On the contrary, when space is not compact, the result does not hold true and one may have different options to define physically equivalent configurations, still preserving determinism. - Highlights: • Investigate the relation between the hole argument, covariance, determinism and physical state. • Show that if space is compact then any diffeomorphism is a gauge symmetry. • Show that if space is not compact then there may be more freedom in choosing the gauge group.

  10. Study on UKF based federal integrated navigation for high dynamic aviation

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Shao, Wei; Chen, Kai; Yan, Jie

    2011-08-01

    High dynamic aircraft, such as hypersonic vehicles, are an attractive new generation of vehicles that operate in near space across a large flight envelope in both speed and altitude. The complex flight environment of high dynamic vehicles demands a navigation scheme of high accuracy and stability. Since the conventional federated integration of the Strapdown Inertial Navigation System (SINS) and Global Positioning System (GPS) based on the Extended Kalman Filter (EKF) fails during GPS signal blackouts caused by high-speed flight, a new high-precision, stable integrated navigation approach is presented in this paper, in which SINS, GPS, and a Celestial Navigation System (CNS) are combined in a federated information-fusion configuration based on the nonlinear Unscented Kalman Filter (UKF) algorithm. First, the state-error model of the new integrated system is derived. Based on this error model, the SINS serves as the mathematical platform for the navigation solution. SINS combined with GPS forms one UKF-based error-estimation subsystem yielding a local optimal estimate, and SINS combined with CNS forms another. A non-reset federated filter configuration based on partial information is proposed to fuse the two local optimal estimates into a global optimal error estimate, which is then used to correct the SINS navigation solution. A χ2 fault-detection method detects subsystem faults, and faulty subsystems are isolated over the fault interval to protect the system from divergence. The integrated system exploits the complementary advantages of SINS, GPS, and CNS, greatly improving accuracy and reliability for high dynamic navigation. Simulation results show that the federated fusion of GPS and CNS corrections to the SINS solution is reasonable and effective, with estimation performance that satisfies the demands of high dynamic flight navigation. The UKF-based integrated scheme is superior to the EKF-based scheme, with smaller estimation error and a faster convergence rate.
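
    The fusion of two local optimal estimates into a global one can be illustrated in the scalar case by information-weighted averaging; a minimal sketch (the function name is illustrative, and the actual federated filter fuses full state vectors and covariance matrices rather than scalars):

```python
def fuse(x1, p1, x2, p2):
    """Combine two local estimates (x1, x2) with variances (p1, p2):
    the global estimate weights each by its inverse variance (information)."""
    info = 1.0 / p1 + 1.0 / p2
    pg = 1.0 / info
    xg = pg * (x1 / p1 + x2 / p2)
    return xg, pg
```

    The fused variance is always smaller than either local variance, which is why combining the SINS/GPS and SINS/CNS subsystems improves on each alone.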

  11. Image-based optimization of coronal magnetic field models for improved space weather forecasting

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.

    2017-12-01

    The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used for improving coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to the previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ 2016, 2017). Our tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona, and lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry and potentially more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.

  12. Cooperative global optimal preview tracking control of linear multi-agent systems: an internal model approach

    NASA Astrophysics Data System (ADS)

    Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang

    2017-09-01

    This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted into a global optimal regulation problem of an augmented system. Second, an optimal controller, which can guarantee the asymptotic stability of the augmented system, is obtained by means of standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, and a cooperative global optimal controller with error integral and preview compensation is derived. Finally, the validity of the theoretical results is demonstrated by a numerical simulation.

  13. Dispositional optimism and sleep quality: a test of mediating pathways

    PubMed Central

    Cribbet, Matthew; Kent de Grey, Robert G.; Cronan, Sierra; Trettevik, Ryan; Smith, Timothy W.

    2016-01-01

    Dispositional optimism has been related to beneficial influences on physical health outcomes. However, its links to global sleep quality and the psychological mediators responsible for such associations are less studied. This study thus examined if trait optimism predicted global sleep quality, and if measures of subjective well-being were statistical mediators of such links. A community sample of 175 participants (93 men, 82 women) completed measures of trait optimism, depression, and life satisfaction. Global sleep quality was assessed using the Pittsburgh Sleep Quality Index. Results indicated that trait optimism was a strong predictor of better PSQI global sleep quality. Moreover, this association was mediated by depression and life satisfaction in both single and multiple mediator models. These results highlight the importance of optimism for the restorative process of sleep, as well as the utility of multiple mediator models in testing distinct psychological pathways. PMID:27592128

  14. Dispositional optimism and sleep quality: a test of mediating pathways.

    PubMed

    Uchino, Bert N; Cribbet, Matthew; de Grey, Robert G Kent; Cronan, Sierra; Trettevik, Ryan; Smith, Timothy W

    2017-04-01

    Dispositional optimism has been related to beneficial influences on physical health outcomes. However, its links to global sleep quality and the psychological mediators responsible for such associations are less studied. This study thus examined if trait optimism predicted global sleep quality, and if measures of subjective well-being were statistical mediators of such links. A community sample of 175 participants (93 men, 82 women) completed measures of trait optimism, depression, and life satisfaction. Global sleep quality was assessed using the Pittsburgh Sleep Quality Index. Results indicated that trait optimism was a strong predictor of better PSQI global sleep quality. Moreover, this association was mediated by depression and life satisfaction in both single and multiple mediator models. These results highlight the importance of optimism for the restorative process of sleep, as well as the utility of multiple mediator models in testing distinct psychological pathways.

  15. Synchronization of an ensemble of oscillators regulated by their spatial movement.

    PubMed

    Sarkar, Sumantra; Parmananda, P

    2010-12-01

    Synchronization for a collection of oscillators residing in a finite two dimensional plane is explored. The coupling between any two oscillators in this array is unidirectional, viz., master-slave configuration. Initially the oscillators are distributed randomly in space and their autonomous time-periods follow a Gaussian distribution. The duty cycles of these oscillators, which work under an on-off scenario, are normally distributed as well. It is realized that random hopping of oscillators is a necessary condition for observing global synchronization in this ensemble of oscillators. Global synchronization in the context of the present work is defined as the state in which all the oscillators are rendered identical. Furthermore, there exists an optimal amplitude of random hopping for which the attainment of this global synchronization is the fastest. The present work is deemed to be of relevance to the synchronization phenomena exhibited by pulse coupled oscillators such as a collection of fireflies. © 2010 American Institute of Physics.

  16. Magnetic Resonance Imaging in Patients With Mechanical Low Back Pain Using a Novel Rapid-Acquisition Three-Dimensional SPACE Sequence at 1.5-T: A Pilot Study Comparing Lumbar Stenosis Assessment With Routine Two-Dimensional Magnetic Resonance Sequences.

    PubMed

    Swami, Vimarsha G; Katlariwala, Mihir; Dhillon, Sukhvinder; Jibri, Zaid; Jaremko, Jacob L

    2016-11-01

    To minimize the burden of overutilisation of lumbar spine magnetic resonance imaging (MRI) on a resource-constrained public healthcare system, it may be helpful to image some patients with mechanical low-back pain (LBP) using a simplified rapid MRI screening protocol at 1.5-T. A rapid-acquisition 3-dimensional (3D) SPACE (Sampling Perfection with Application-optimized Contrasts using different flip angle Evolution) sequence can demonstrate common etiologies of LBP. We compared lumbar spinal canal stenosis (LSCS) and neural foraminal stenosis (LNFS) assessment on 3D SPACE against conventional 2-dimensional (2D) MRI. We prospectively performed 3D SPACE and 2D spin-echo MRI sequences (axial or sagittal T1-weighted or T2-weighted) at 1.5-T in 20 patients. Two blinded readers assessed levels L3-4, L4-5 and L5-S1 using: 1) morphologic grading systems, 2) global impression on the presence or absence of clinically significant stenosis (n = 60 disc levels for LSCS, n = 120 foramina for LNFS). Reliability statistics were calculated. Acquisition time was ∼5 minutes for SPACE and ∼20 minutes for 2D MRI sequences. Interobserver agreement of LSCS was substantial to near perfect on both sequences (morphologic grading: kappa [k] = 0.71 SPACE, k = 0.69 T2-weighted; global impression: k = 0.85 SPACE, k = 0.78 T2-weighted). LNFS assessment had superior interobserver reliability using SPACE than T1-weighted (k = 0.54 vs 0.37). Intersequence agreement of findings between SPACE and 2D MRI was substantial to near perfect by global impression (LSCS: k = 0.78 Reader 1, k = 0.85 Reader 2; LNFS: k = 0.63 Reader 1, k = 0.66 Reader 2). 3D SPACE was acquired in one-quarter the time as the conventional 2D MRI protocol, had excellent agreement with 2D MRI for stenosis assessment, and had interobserver reliability superior to 2D MRI. These results justify future work to explore the role of 3D SPACE in a rapid MRI screening protocol at 1.5-T for mechanical LBP. 
Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
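
    The interobserver agreement figures quoted above are Cohen's kappa values, which correct raw percent agreement for agreement expected by chance; a minimal sketch of the computation for two raters over nominal labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n           # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)                    # chance agreement
    return (po - pe) / (1 - pe)
```

    On the usual scale, values of 0.61-0.80 are read as substantial agreement and 0.81-1.00 as near-perfect, matching the interpretation given in the abstract.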

  17. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
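
    The first metric, area-weighted average gap time, weights each latitude band by the surface area it represents, which is proportional to the cosine of latitude; a minimal sketch with an assumed per-latitude-band input format:

```python
import math

def area_weighted_average(gaps_by_lat):
    """Average a per-latitude-band metric (e.g. coverage gap time) with
    cos(latitude) area weights, so polar bands count less than equatorial ones."""
    wsum = total = 0.0
    for lat_deg, gap in gaps_by_lat:
        w = math.cos(math.radians(lat_deg))
        wsum += w
        total += w * gap
    return total / wsum
```

    Without the cosine weighting, polar grid cells (which cover little real surface area) would be overrepresented in the average and bias the orbit selection.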

  18. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied to an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
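
    The basic fruit fly optimization loop that the paper improves upon can be sketched as follows. This is the plain FOA, in which the candidate solution is the "smell concentration" (reciprocal of the fly's distance to the origin), without the chaotic initialization or mutation strategy; all parameters are illustrative:

```python
import math
import random

def foa(fitness, pop=20, iters=60, seed=0):
    """Basic fruit fly optimization sketch: flies scatter around the swarm
    location (osphresis phase), and the swarm relocates to the best fly
    found so far (vision phase)."""
    random.seed(seed)
    x, y = random.random(), random.random()           # swarm location
    best_s, best_f = None, float("inf")
    for _ in range(iters):
        for _ in range(pop):
            xi = x + random.uniform(-1, 1)            # random search step
            yi = y + random.uniform(-1, 1)
            dist = math.hypot(xi, yi) or 1e-12
            s = 1.0 / dist                            # smell concentration = candidate
            f = fitness(s)
            if f < best_f:                            # vision phase: follow best fly
                best_s, best_f, x, y = s, f, xi, yi
    return best_s, best_f
```

    Because candidates depend only on the distance to the origin, the plain FOA tends to stagnate once the swarm settles; the paper's chaotic initialization and dual-mechanism mutation are aimed precisely at restoring exploration in that situation.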

  19. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied to an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096

  20. What the Heliophysics System Observatory is teaching us about future constellations

    NASA Astrophysics Data System (ADS)

    Angelopoulos, V.

    2017-12-01

    Owing to the benign space weather during the recent solar cycle, numerous Heliophysics missions have outlived their original purpose and have exceeded expectations in terms of science return. The simultaneous availability of several multi-spacecraft fleets also offers conjunction opportunities that compound their science yield. It allows the Heliophysics System, a vast region of Sun-Earth interactions, to be viewed through the collective eyes of a fortuitous grand Observatory. The success of this Heliophysics/Geospace System Observatory (H/GSO) has been partly due to fuel resources available on THEMIS, allowing it to reconfigure its orbits' lines of apsides, apogees, and mean anomalies to optimize conjunctions with the rest of the H/GSO. The other part of the success has been a mandatory open data policy, the accessibility of the data through common data formats, unified analysis tools (e.g., SPEDAS), and distributed data repositories. Future constellations are motivated by the recent science lessons learned: tight connections between dayside and nightside processes, evidenced by fortuitous conjunctions of ground- and space-based assets, suggest that regional activations drive classical global modes of circulation. Just as regional tornadoes and hurricanes synthesize global atmospheric weather that cannot be studied with five weather stations alone, one per continent, dayside reconnection and nightside injections require more than a handful of point measurements. Like atmospheric weather, space weather too requires networks of stations built to meet a minimum set of requirements to "play together" and build on each other over time. Just as Argo's >3000 buoys have revolutionized research, modeling, and prediction by global circulation models, "space buoys" can study space weather fronts and double up as monitors and inputs to space weather models, increasing fidelity and advance warning. Reconfigurability can allow versatility as the scientific targets adjust to the knowledge gained over the years. Classical single-satellite, multi-sensor or imaging missions can benefit from the context that constellations provide. CubeSats, a disruptive technology, are catalysts for the emergence of constellations, a new research and operations asset for Heliophysics.

  1. HI-STAR. Health Improvements through Space Technologies and Resources: Executive Summary

    NASA Technical Reports Server (NTRS)

    Finarelli, Margaret G.

    2002-01-01

    Our mission is to develop and promote a global strategy to help combat malaria using space technology. Like the tiny yet powerful mosquito, HI-STAR (Health Improvements Through Space Technologies and Resources) is a small program that aspires to make a difference. Timely detection of malaria danger zones is essential to help health authorities and policy makers decide how to manage limited resources for combating malaria. In 2001, the technical support network for prevention and control of malaria epidemics published a study called 'Malaria Early Warning Systems: Concepts, Indicators and Partners.' This study, funded by Roll Back Malaria, a World Health Organization initiative, offers a framework for a monitoring and early warning system. HI-STAR seeks to build on this proposal and enhance the space elements of the suggested framework. HI-STAR focuses on malaria because it is the most common and deadly of the vector-borne diseases. Malaria also shares many commonalities with other diseases, which means the global strategy developed here may also be applicable to other parasitic diseases. HI-STAR would like to contribute to the many groups already making great strides in the fight against malaria. Some examples include Roll Back Malaria, the Special Program for Research and Training in Tropical Diseases (TDR), and the Multilateral Initiative on Malaria (MIM). Other important groups that are among the first to include space technologies in their models include the Center for Health Application of Aerospace Related Technologies (CHAART) and Mapping Malaria Risk in Africa (MARA). Malaria is a complex and multi-faceted disease; combating it must therefore be equally versatile. HI-STAR incorporates an interdisciplinary, international, intercultural approach. This report is the work of fifty-three professionals and students from the International Space University's 2002 Summer Session Program held in California, USA.

  2. Spatial and Spin Symmetry Breaking in Semidefinite-Programming-Based Hartree-Fock Theory.

    PubMed

    Nascimento, Daniel R; DePrince, A Eugene

    2018-05-08

    The Hartree-Fock problem was recently recast as a semidefinite optimization over the space of rank-constrained two-body reduced-density matrices (RDMs) [Phys. Rev. A 89, 010502(R) (2014)]. This formulation of the problem transfers the nonconvexity of the Hartree-Fock energy functional to the rank constraint on the two-body RDM. We consider an equivalent optimization over the space of positive semidefinite one-electron RDMs (1-RDMs) that retains the nonconvexity of the Hartree-Fock energy expression. The optimized 1-RDM satisfies ensemble N-representability conditions, and ensemble spin-state conditions may be imposed as well. The spin-state conditions place additional linear and nonlinear constraints on the 1-RDM. We apply this RDM-based approach to several molecular systems and explore its spatial (point group) and spin (Ŝ² and Ŝ₃) symmetry breaking properties. When imposing Ŝ² and Ŝ₃ symmetry but relaxing point group symmetry, the procedure often locates spatial-symmetry-broken solutions that are difficult to identify using standard algorithms. For example, the RDM-based approach yields a smooth, spatial-symmetry-broken potential energy curve for the well-known BeH₂ insertion pathway. We also demonstrate numerically that, upon relaxation of Ŝ² and Ŝ₃ symmetry constraints, the RDM-based approach is equivalent to real-valued generalized Hartree-Fock theory.

  3. Multiple-copy state discrimination: Thinking globally, acting locally

    NASA Astrophysics Data System (ADS)

    Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.

    2011-05-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
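    The benchmark against which all of these schemes are measured, the optimal collective measurement, has a closed form: the Helstrom bound. The sketch below evaluates it numerically for tensor powers of two mixed qubit states; the function helstrom_error and the example states are our own illustrative construction, not code or numbers from the paper.

```python
import numpy as np

def helstrom_error(rho0, rho1, n, p0=0.5):
    # Minimum error probability for discriminating n copies of rho0 vs rho1
    # with the optimal collective measurement (Helstrom bound):
    #   P_err = (1 - || p0*rho0^(xn) - p1*rho1^(xn) ||_1) / 2
    R0, R1 = rho0, rho1
    for _ in range(n - 1):
        R0 = np.kron(R0, rho0)
        R1 = np.kron(R1, rho1)
    gamma = p0 * R0 - (1.0 - p0) * R1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)

# Example: two nonorthogonal qubit states, each with 10% depolarizing mixture
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)
eps = 0.1
rho0 = (1 - eps) * np.outer(ket0, ket0) + eps * np.eye(2) / 2
rho1 = (1 - eps) * np.outer(ketp, ketp) + eps * np.eye(2) / 2
errs = [helstrom_error(rho0, rho1, n) for n in (1, 2, 3)]  # decreases with n
```

    Any local (collective or adaptive) scheme's error probability can then be compared against this lower bound.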

  4. Multiple-copy state discrimination: Thinking globally, acting locally

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.

    2011-05-15

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  5. Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)

    2001-01-01

    Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent, and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters.
Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3. The algorithms were used to optimize the computed RF efficiency of a TWT by determining the phase velocity profile of the slow-wave circuit. The mathematical theory and computational details of the DSSA algorithms will be presented and results will be compared to those obtained with a SA algorithm.
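    The controlled acceptance of non-improving moves and the cooling schedule described above can be sketched generically. This is a minimal sketch of classic SA, not the TWA3 implementation; the geometric schedule T_k = t0·alpha^k and its parameters are exactly the problem-dependent knobs that DSSA aims to make generic.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, alpha=0.95, steps=2000, seed=0):
    # Accept any improving move; accept a non-improving move with
    # probability exp(-dE / T), where T follows the geometric cooling
    # schedule T_k = t0 * alpha^k.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy          # move accepted (possibly uphill)
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                 # cool: uphill moves become rarer
    return best, fbest
```

    Early on (large T) the walker roams freely; as T shrinks the search becomes greedy, mirroring the freedom-of-movement description above.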

  6. Coulomb branches with complex singularities

    NASA Astrophysics Data System (ADS)

    Argyres, Philip C.; Martone, Mario

    2018-06-01

    We construct 4d superconformal field theories (SCFTs) whose Coulomb branches have singular complex structures. This implies, in particular, that their Coulomb branch coordinate rings are not freely generated. Our construction also gives examples of distinct SCFTs which have identical moduli space (Coulomb, Higgs, and mixed branch) geometries. These SCFTs thus provide an interesting arena in which to test the relationship between moduli space geometries and conformal field theory data. We construct these SCFTs by gauging certain discrete global symmetries of N = 4 super Yang-Mills (sYM) theories. In the simplest cases, these discrete symmetries are outer automorphisms of the sYM gauge group, and so these theories have Lagrangian descriptions as N = 4 sYM theories with disconnected gauge groups.

  7. Effect of diatomic molecular properties on binary laser pulse optimizations of quantum gate operations.

    PubMed

    Zaari, Ryan R; Brown, Alex

    2011-07-28

    The importance of the ro-vibrational state energies on the ability to produce high fidelity binary shaped laser pulses for quantum logic gates is investigated. The single frequency 2-qubit ACNOT(1) and double frequency 2-qubit NOT(2) quantum gates are used as test cases to examine this behaviour. A range of diatomics is sampled. The laser pulses are optimized using a genetic algorithm for binary (two amplitude and two phase parameter) variation on a discretized frequency spectrum. The resulting trends in the fidelities were attributed to the intrinsic molecular properties and not the choice of method: a discretized frequency spectrum with genetic algorithm optimization. This is verified by using other common laser pulse optimization methods (including iterative optimal control theory), which result in the same qualitative trends in fidelity. The results differ from other studies that used vibrational state energies only. Moreover, appropriate choice of diatomic (relative ro-vibrational state arrangement) is critical for producing high fidelity optimized quantum logic gates. It is also suggested that global phase alignment imposes a significant restriction on obtaining high fidelity regions within the parameter search space. Overall, this indicates a complexity in the ability to provide appropriate binary laser pulse control of diatomics for molecular quantum computing.
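    A binary genetic algorithm of the kind described (fixed-length bitstrings encoding discretized parameters) can be sketched as follows, with a trivial stand-in fitness. Population size, tournament selection, elitism, and the mutation rate are illustrative choices, not the authors' settings.

```python
import random

def ga_binary(fitness, nbits, pop_size=30, gens=60, pmut=0.02, seed=3):
    # Minimal generational GA on bitstrings: tournament selection,
    # one-point crossover, bit-flip mutation, two-elite survival.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(nbits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = [row[:] for row in scored[:2]]           # elitism: keep two best
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, nbits)              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < pmut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

    In the paper's setting the bitstring would decode to amplitude and phase values on the discretized frequency grid and the fitness would be the gate fidelity; here a OneMax-style fitness suffices to exercise the machinery.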

  8. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
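    The part-selection problem has the following flavor: pick one part per slot to best match a target behavior subject to user-defined constraints. The toy below replaces the paper's MINLP solver with brute-force enumeration, and every part name and number is hypothetical.

```python
from itertools import product

# Hypothetical characterized part library (all numbers made up)
promoters = {"pLow": 0.2, "pMed": 1.0, "pHigh": 5.0}   # relative strengths
rbs = {"rWeak": 0.5, "rStrong": 2.0}                   # relative efficiencies

TARGET = 2.0       # desired steady-state expression, arbitrary units
MAX_BURDEN = 5.0   # user-defined constraint on total load

best = None
for (pn, pv), (rn, rv) in product(promoters.items(), rbs.items()):
    if pv + rv > MAX_BURDEN:        # constraint: prune infeasible selections
        continue
    err = abs(pv * rv - TARGET)     # toy steady-state expression model
    if best is None or err < best[0]:
        best = (err, pn, rn, pv * rv)
```

    An MINLP formulation encodes the same choice with binary selection variables and nonlinear circuit dynamics, which is what allows a deterministic solver to certify global optimality instead of enumerating.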

  9. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
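    The core of such a numerical identifiability test, accumulating the information matrix from output sensitivities during a simulated experiment and checking its rank, can be sketched as follows. The finite-difference sensitivities and the Gaussian-noise assumption are our own simplifications, not the paper's recursive formulation.

```python
import numpy as np

def information_matrix(simulate, theta, sigma=0.1, h=1e-5):
    # Fisher information for observations y = simulate(theta) + N(0, sigma^2),
    # accumulated sample by sample: F = sum_k s_k s_k^T / sigma^2, where
    # s_k = dy_k/dtheta comes from central finite differences.
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    y0 = simulate(theta)
    S = np.zeros((y0.size, p))
    for j in range(p):
        d = np.zeros(p)
        d[j] = h
        S[:, j] = (simulate(theta + d) - simulate(theta - d)) / (2.0 * h)
    F = np.zeros((p, p))
    for k in range(y0.size):        # accumulate over simulated samples
        F += np.outer(S[k], S[k]) / sigma**2
    return F
```

    Parameters are locally identifiable when F is full rank, and expected estimation errors follow from the diagonal of its inverse; a Monod-type growth law (as in ASM-style models) makes a convenient test case.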

  10. Modeling Common Cause Failures of Thrusters on ISS Visiting Vehicles

    NASA Technical Reports Server (NTRS)

    Haught, Megan; Duncan, Gary

    2014-01-01

    This paper discusses the methodology used to model common cause failures of thrusters on the International Space Station (ISS) Visiting Vehicles. The ISS Visiting Vehicles each have as many as 32 thrusters, whose redundancy and similar design make them susceptible to common cause failures. The Global Alpha Model (as described in NUREG/CR-5485) can be used to represent the system common cause contribution, but NUREG/CR-5496 supplies global alpha parameters for groups only up to size six. Because of the large number of redundant thrusters on each vehicle, regression is used to determine parameter values for groups of size larger than six. An additional challenge is that Visiting Vehicle thruster failures must occur in specific combinations in order to fail the propulsion system; not all failure groups of a certain size are critical.
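    The regression step can be illustrated with a log-linear fit of alpha parameters against failure multiplicity, extrapolated past group size six. The numbers below are placeholders chosen for illustration, NOT the NUREG/CR-5496 values, and the log-linear form is an assumption.

```python
import numpy as np

# Placeholder alpha-factor estimates for failure multiplicities 2..6
# (illustrative only -- NOT the NUREG/CR-5496 parameters)
k = np.array([2, 3, 4, 5, 6])
alpha = np.array([2.0e-2, 5.0e-3, 1.5e-3, 6.0e-4, 2.5e-4])

# Fit log(alpha) = a + b*k, then extrapolate to group sizes beyond six
b, a = np.polyfit(k, np.log(alpha), 1)

def alpha_hat(m):
    # Regression-based estimate of the alpha factor for multiplicity m
    return float(np.exp(a + b * m))
```

    For a 32-thruster vehicle the extrapolated factors would feed the same common-cause terms as the tabulated ones, with the vehicle-specific failure combinations applied on top.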

  11. The NEWS Water Cycle Climatology

    NASA Astrophysics Data System (ADS)

    Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T.; Olson, W. S.

    2012-12-01

    NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.
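    The closure constraint can be enforced by a variance-weighted adjustment of the flux estimates, the standard constrained least-squares update. The sketch below is generic, and the fluxes and error variances are made-up illustrative numbers, not NEWS results.

```python
import numpy as np

def enforce_closure(x, var, A=None):
    # Minimally adjust flux estimates x (error variances var), in the
    # variance-weighted sense, so the budget residual A @ x becomes zero:
    #   x* = x - V A^T (A V A^T)^{-1} (A x)
    x, var = np.asarray(x, float), np.asarray(var, float)
    if A is None:
        A = np.array([[1.0, -1.0, -1.0, -1.0]])  # P - ET - R - dS = 0
    V = np.diag(var)
    K = V @ A.T @ np.linalg.inv(A @ V @ A.T)
    return x - (K @ (A @ x)).ravel()

# Hypothetical annual land fluxes (mm/yr): P, ET, R, dS; residual = 15 mm/yr
x = np.array([800.0, 520.0, 260.0, 5.0])
adj = enforce_closure(x, var=np.array([40.0, 60.0, 20.0, 10.0]) ** 2)
```

    The least-certain flux absorbs the largest share of the residual, which is exactly how input error assessments propagate into the optimized estimates.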

  12. The NEWS Water Cycle Climatology

    NASA Technical Reports Server (NTRS)

    Rodell, Matthew; Beaudoing, Hiroko Kato; L'Ecuyer, Tristan; William, Olson

    2012-01-01

    NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.

  13. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    PubMed

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated the framework's advantages in precision, robustness, scalability, and computational efficiency.
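    Trace ratio problems of the kind formulated here are commonly solved by an iterative eigendecomposition scheme; the sketch below shows that generic scheme, not the paper's specific algorithm, and the matrix names are illustrative.

```python
import numpy as np

def trace_ratio(A, B, d, iters=30):
    # Maximize tr(W^T A W) / tr(W^T B W) over orthonormal W (n x d) by the
    # standard fixed-point scheme: given lambda, W spans the top-d
    # eigenvectors of A - lambda*B; then lambda is updated to the new ratio.
    lam = 0.0
    for _ in range(iters):
        _, vecs = np.linalg.eigh(A - lam * B)
        W = vecs[:, -d:]                   # eigh sorts ascending: take last d
        lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
    return W, lam
```

    On simple symmetric matrices the fixed point is easy to verify by hand, since with B = I the optimum is the mean of the d largest eigenvalues of A.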

  14. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used to characterize the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance, and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
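    Drawing initial models with a prescribed mean, variance, and Hurst coefficient can be sketched with the standard spectral-synthesis method for power-law (fractional Gaussian) noise. The exact spectral exponent convention and the function name are illustrative assumptions, not the authors' sampler.

```python
import numpy as np

def powerlaw_samples(n, hurst, mean=0.0, std=1.0, seed=0):
    # Spectral synthesis: random phases with amplitude ~ f^{-(2H-1)/2},
    # i.e. a power spectrum ~ f^{-(2H-1)} (fractional-Gaussian-noise-like),
    # rescaled to the target mean and standard deviation.
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(2.0 * hurst - 1.0) / 2.0)
    spectrum = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, freqs.size))
    x = np.fft.irfft(spectrum, n)
    x = (x - x.mean()) / x.std()
    return mean + std * x
```

    Each call with a different seed yields one candidate initial model honoring the fractal statistics, which can then be handed to the global optimization scheme.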

  15. OAST Space Theme Workshop. Volume 3: Working group summary. 3: Sensors (E-3). A. Statement. B. Technology needs (form 1). C. Priority assessment (form 2). D. Additional assessment

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Developments required to support the space power, SETI, solar system exploration, and global services programs are identified. Instrumentation and calibration sensors (rather than scientific ones) are needed for the space power system. SETI needs highly sophisticated receivers for narrowband detection of microwave signals, as well as sensors for automated stellar cataloging to provide a mapping data base. Various phases of solar system exploration require large-area solid state imaging arrays from UV to IR; a long focal plane telescope; high energy particle detectors; advanced spectrometers; a gravimeter; an atmospheric dust analyzer; sensors for penetrometers; and in-situ sensors for surface chemical analysis, life detection, spectroscopic and microscopic analyses of surface soils, and meteorological measurements. Active and passive multiapplication sensors, advanced multispectral scanners with improved resolution in the UV and IR ranges, and laser techniques for advanced probing and oceanographic characterization will enhance global services.

  16. Technique Developed for Optimizing Traveling-Wave Tubes

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    1999-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT, from 7.1 to 13.5 percent (ref. 1).

  17. Systems and Photosystems: Cellular Limits of Autotrophic Productivity in Cyanobacteria

    PubMed Central

    Burnap, Robert L.

    2014-01-01

    Recent advances in the modeling of microbial growth and metabolism have shown that growth rate critically depends upon the optimal allocation of finite proteomic resources among different cellular functions and that modeling growth rates becomes more realistic with explicit accounting for the costs of macromolecular synthesis, most importantly, protein expression. The “proteomic constraint” is considered together with its application to understanding photosynthetic microbial growth. The central hypothesis is that physical limits of cellular space (and corresponding solvation capacity) in conjunction with cell surface-to-volume ratios represent the underlying constraints on the maximal rate of autotrophic microbial growth. The limitation of cellular space thus constrains the size of the total complement of macromolecules, dissolved ions, and metabolites. To a first approximation, the upper limit on the cellular amount of the total proteome is bounded by this space limit. This predicts that adaptation to osmotic stress will result in lower maximal growth rates due to decreased cellular concentrations of core metabolic proteins necessary for cell growth, owing to the accumulation of compatible osmolytes, as surmised previously. The finite capacity of membrane and cytoplasmic space also leads to the hypothesis that the species-specific differences in maximal growth rates likely reflect differences in the allocation of space to niche-specific proteins, with the corresponding diminution of space devoted to other functions, including proteins of core autotrophic metabolism, which drive cell reproduction. An optimization model for autotrophic microbial growth, the autotrophic replicator model, was developed based upon previous work investigating heterotrophic growth. The present model describes autotrophic growth in terms of the allocation of protein resources among core functional groups, including the photosynthetic electron transport chain, light-harvesting antennae, and the ribosome groups. PMID:25654078

  18. [Navigated implantation of total knee endoprostheses--a comparative study with conventional instrumentation].

    PubMed

    Jenny, J Y; Boeri, C

    2001-01-01

    A navigation system should improve the quality of total knee prosthesis implantation in comparison to the classical, surgeon-controlled operative technique. The authors implanted 40 total knee prostheses with an optical infrared navigation system (Orthopilot, AESCULAP, Tuttlingen; group A). The quality of implantation was studied on postoperative long-leg AP and lateral X-rays and compared to a control group of 40 computer-paired total knee prostheses of the same model (Search Prosthesis, AESCULAP, Tuttlingen) implanted with a classical, surgeon-controlled technique (group B). An optimal mechanical femorotibial angle (3 degrees valgus to 3 degrees varus) was obtained in 33 cases in group A and 31 cases in group B (p > 0.05). Better results were seen for the coronal and sagittal orientation of both tibial and femoral components in group A. Globally, 26 cases in group A and 12 cases in group B were implanted in an optimal manner for all studied criteria (p < 0.01). The navigation system used allows a significant improvement in the quality of implantation of a total knee prosthesis in comparison to classical, surgeon-controlled instrumentation. Long-term outcome could consequently be improved.

  19. Feature-space-based FMRI analysis using the optimal linear transformation.

    PubMed

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve activation-detection performance over conventional approaches to fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.

  20. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  1. Global design of satellite constellations: a multi-criteria performance comparison of classical walker patterns and new design patterns

    NASA Astrophysics Data System (ADS)

    Lansard, Erick; Frayssinhes, Eric; Palmade, Jean-Luc

    Basically, the problem of designing a multisatellite constellation involves many parameters with many possible combinations: total number of satellites, orbital parameters of each individual satellite, number of orbital planes, number of satellites in each plane, spacing between satellites of each plane, spacing between orbital planes, and relative phasing between consecutive orbital planes. Fortunately, some authors have theoretically solved this complex problem under simplified assumptions: the permanent (or continuous) coverage of the whole Earth and of zonal areas by single and multiple satellites has been entirely solved from a purely geometrical point of view. These solutions exhibit strong symmetry properties (e.g. Walker, Ballard, Rider, Draim constellations): altitude and inclination are identical, orbital planes and satellites are regularly spaced, etc. The problem with such constellations is their oversimplified and restrictive geometrical assumptions. In fact, the evaluation function used implicitly takes into account only the point-to-point visibility between users and satellites and does not deal with very important constraints and considerations that become mandatory when designing a real satellite system (e.g. robustness to satellite failures, total system cost, common view between satellites and ground stations, service availability and satellite reliability, launch and early operations phase, production constraints, etc.). An original and global methodology relying on a powerful optimization tool based on genetic algorithms has been developed at ALCATEL ESPACE. In this approach, symmetrical constellations can be used as initial conditions of the optimization process together with specific evaluation functions. A multi-criteria performance analysis is conducted and presented here in a parametric way in order to identify and evaluate the main sensitive parameters.
Quantitative results are given for three examples in the fields of navigation, telecommunication and multimedia satellite systems. In particular, a new design pattern with very efficient properties in terms of robustness to satellite failures is presented and compared with classical Walker patterns.
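    The classical Walker patterns used here as a baseline (and as initial conditions for the genetic-algorithm search) follow the standard i:t/p/f notation; a minimal sketch of generating such a pattern, with illustrative parameter values, might look like this.

```python
def walker_delta(t, p, f, inclination_deg):
    """Return (RAAN, mean anomaly, inclination) in degrees for t satellites
    in p equally spaced planes with inter-plane phasing factor f."""
    s = t // p                      # satellites per plane
    sats = []
    for plane in range(p):
        raan = 360.0 * plane / p
        for k in range(s):
            # in-plane spacing 360/s, plus the Walker phase offset f*360/t
            anomaly = (360.0 * k / s + 360.0 * f * plane / t) % 360.0
            sats.append((raan, anomaly, inclination_deg))
    return sats

# Illustrative 56°:24/3/1 pattern (Galileo-like values, not from the paper).
constellation = walker_delta(t=24, p=3, f=1, inclination_deg=56.0)
print(len(constellation))  # 24
```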

  2. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has shown promise in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of the objective functions are often known a priori. These formulas provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "-", "×", "÷", and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. Variables can then be automatically grouped into a suitable number of non-interacting subcomponents, with the variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. 
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing each of them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and on a real-world problem. Our results suggest that the performance of CCF is very competitive with that of the state-of-the-art LSGO algorithms.
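    The core grouping idea can be sketched with a toy representation in which the objective is a sum of terms and variables sharing a nonseparable term are merged via union-find; this simplified term encoding is our own, not the paper's FBG parser.

```python
def group_variables(n_vars, terms):
    """terms: list of variable-index sets, one per additive term.
    Addition/subtraction separate variables; multiplication, division and
    composition inside a term make its variables interdependent."""
    parent = list(range(n_vars))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for term in terms:
        term = sorted(term)
        for v in term[1:]:                  # merge all variables in one term
            union(term[0], v)
    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# f(x) = x0*x1 + sin(x2*x3) + x4**2  ->  three independent subcomponents
print(group_variables(5, [{0, 1}, {2, 3}, {4}]))  # [[0, 1], [2, 3], [4]]
```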

  3. Contact-assisted protein structure modeling by global optimization in CASP11.

    PubMed

    Joo, Keehyoung; Joung, InSuk; Cheng, Qianyi; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

    We have applied the conformational space annealing method to contact-assisted protein structure modeling in CASP11. For Tp targets, where predicted residue-residue contact information was provided, the contact energy term in the form of the Lorentzian function was implemented together with the physical energy terms used in our template-free modeling of proteins. Although we observed some structural improvement of Tp models over the models predicted without the Tp information, the improvement was not substantial on average. This is partly due to the inaccuracy of the provided contact information, of which only about 18% was correct. For Ts targets, where the information of ambiguous NOE (Nuclear Overhauser Effect) restraints was provided, we formulated the modeling as a two-tier optimization problem, which covers: (1) the assignment of NOE peaks and (2) three-dimensional (3D) model generation based on the assigned NOEs. Although solving the problem in a direct manner appears to be intractable at first glance, we demonstrate through CASP11 that remarkably accurate protein 3D modeling is possible by brute-force optimization of a relevant energy function. For 19 Ts targets with an average size of 224 residues, the generated protein models were of about 3.6 Å Cα-atom accuracy. Even greater structural improvement was observed when additional Tc contact information was provided. For 20 out of the total 24 Tc targets, we were able to generate protein structures that were better than the best model from the rest of the CASP11 groups in terms of GDT-TS. Proteins 2016; 84(Suppl 1):189-199. © 2015 Wiley Periodicals, Inc.

  4. Sigma models with negative curvature

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2016-03-16

    Here, we construct Higgs Effective Field Theory (HEFT) based on the scalar manifold Hn, which is a hyperbolic space of constant negative curvature. The Lagrangian has a non-compact O(n, 1) global symmetry group, but it gives a unitary theory as long as only a compact subgroup of the global symmetry is gauged. Whether the HEFT manifold has positive or negative curvature can be tested by measuring the S-parameter, and the cross sections for longitudinal gauge boson and Higgs boson scattering, since the curvature (including its sign) determines deviations from Standard Model values.

  5. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent adds uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates its solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
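    The dual decomposition step can be illustrated on a toy separable problem with a single coupling constraint (a stand-in for the OPD cost structure, not the paper's formulation): each aircraft minimizes its own cost for a given price, and a subgradient update on the price coordinates the group.

```python
# Toy problem: minimize sum_i (x_i - a_i)^2  subject to  sum_i x_i = b.
def solve_subproblem(a_i, lam):
    # argmin_x (x - a_i)^2 + lam * x  =>  x = a_i - lam / 2
    return a_i - lam / 2.0

def dual_decomposition(a, b, step=0.2, iters=200):
    lam = 0.0
    for _ in range(iters):
        x = [solve_subproblem(a_i, lam) for a_i in a]   # each agent solves locally
        lam += step * (sum(x) - b)                      # price update on violation
    return x

x = dual_decomposition(a=[3.0, 5.0, 10.0], b=12.0)
print([round(v, 3) for v in x])  # [1.0, 3.0, 8.0]
```

    At the optimum each target is shifted by the same amount so the coupling constraint holds, which is what the converged prices recover.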

  6. Algorithms for optimization of branching gravity-driven water networks

    NASA Astrophysics Data System (ADS)

    Dardani, Ian; Jones, Gerard F.

    2018-05-01

    The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method, including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
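    The discrete-diameter backtracking idea (enumerate diameters pipe by pipe, pruning branches that cannot beat the incumbent design) can be sketched on a toy series network; the head-loss scaling h ∝ L/d^5 and all numbers below are illustrative, not the paper's hydraulic model.

```python
DIAMETERS = [(0.10, 8.0), (0.15, 14.0), (0.20, 22.0)]   # (d [m], cost per m), assumed

def head_loss(length, d, k=1e-4):
    """Darcy-Weisbach-like scaling, illustrative constant k."""
    return k * length / d ** 5

def best_design(lengths, h_budget):
    best = {"cost": float("inf"), "choice": None}
    def recurse(i, cost, h, choice):
        if cost >= best["cost"]:
            return                       # prune: cannot beat incumbent
        if i == len(lengths):
            if h <= h_budget:
                best["cost"], best["choice"] = cost, choice
            return
        for d, unit_cost in DIAMETERS:
            recurse(i + 1, cost + unit_cost * lengths[i],
                    h + head_loss(lengths[i], d), choice + [d])
    recurse(0, 0.0, 0.0, [])
    return best

design = best_design(lengths=[100.0, 50.0], h_budget=100.0)
print(design["cost"], design["choice"])  # 2900.0 [0.2, 0.15]
```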

  7. Towards the Development of a Global, Satellite-based, Terrestrial Snow Mission Planning Tool

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja

    2017-01-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical orbital configuration. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary or perhaps contradictory information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: 1. What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? 2. How might observations be coordinated (in space and time) to maximize utility? 3. What is the additional utility associated with an additional observation? 4. How can future mission costs be minimized while ensuring science requirements are fulfilled?

  8. Towards the Development of a Global, Satellite-Based, Terrestrial Snow Mission Planning Tool

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja

    2017-01-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary or perhaps contradictory information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  9. Expected Characteristics of Global Wind Profile Measurements with a Scanning, Hybrid, Doppler Lidar System

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J.

    2008-01-01

    Over 20 years of investigation by NASA and NOAA scientists and Doppler lidar technologists into a global wind profiling mission from Earth orbit have led to the current favored concept of an instrument with both coherent- and direct-detection pulsed Doppler lidars (i.e., a hybrid Doppler lidar) and a step-stare beam scanning approach covering several azimuth angles with a fixed nadir angle. The nominal lidar wavelengths are 2 microns for coherent detection and 0.355 microns for direct detection. The two agencies have also generated two sets of sophisticated wind measurement requirements for a space mission: science demonstration requirements and operational requirements. The requirements contain the necessary details to permit mission design and optimization by lidar technologists. Simulations have been developed that connect the science requirements to the wind measurement requirements, and that connect the wind measurement requirements to the Doppler lidar parameters. The simulations also permit trade studies within the multi-parameter space. These tools, combined with knowledge of the state of the Doppler lidar technology, have been used to conduct space instrument and mission design activities to validate the feasibility of the chosen mission and lidar parameters. Recently, the NRC Earth Science Decadal Survey recommended the wind mission to NASA as one of 15 recommended missions. A full description of the wind measurement product from these notional missions and the possible trades available are presented in this paper.

  10. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. First, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang’E-1, compared to the existing space resection model. PMID:27077855
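    The key consequence of the two-phase split (once the angular rotations are known, the collinearity equations become linear in the sensor position, so the global optimum is a linear least-squares solution) can be sketched generically; the system below is a random, well-conditioned stand-in, not the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 3))        # 12 linearized GCP equations, 3 unknowns (X, Y, Z)
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(12)   # small observation noise

# Overdetermined linear system: the least-squares solution is globally optimal.
x_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_true, atol=0.05))  # True
```

    A certainty term, as described above, would correspond to row weights that down-weight equations built from unreliable altitude data.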

  11. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. First, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang'E-1, compared to the existing space resection model.

  12. Electrodynamic Dust Shield for Space Applications

    NASA Technical Reports Server (NTRS)

    Mackey, Paul J.; Johansen, Michael R.; Olsen, Robert C.; Raines, Matthew G.; Phillips, James R., III; Cox, Rachel E.; Hogue, Michael D.; Pollard, Jacob R. S.; Calle, Carlos I.

    2016-01-01

    Dust mitigation technology has been highlighted by NASA and the International Space Exploration Coordination Group (ISECG) as a Global Exploration Roadmap (GER) critical technology need in order to reduce life cycle cost and risk, and increase the probability of mission success. The Electrostatics and Surface Physics Lab in Swamp Works at the Kennedy Space Center has developed an Electrodynamic Dust Shield (EDS) to remove dust from multiple surfaces, including glass shields and thermal radiators. Further development is underway to improve the operation and reliability of the EDS as well as to perform material and component testing outside of the International Space Station (ISS) on the Materials on International Space Station Experiment (MISSE). This experiment is designed to verify that the EDS can withstand the harsh environment of space and will look to closely replicate the solar environment experienced on the Moon.

  13. An International Disaster Management SensorWeb Consisting of Space-based and Insitu Sensors

    NASA Astrophysics Data System (ADS)

    Mandl, D.; Frye, S. W.; Policelli, F. S.; Cappelaere, P. G.

    2009-12-01

    For the past year, NASA, along with partners consisting of the United Nations Space-based Information for Disaster and Emergency Response (UN-SPIDER) office, the Canadian Space Agency, the Ukraine Space Research Institute (SRI), and the Taiwan National Space Program Office (NSPO), and in conjunction with the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS), has been conducting a pilot project to automate the process of obtaining sensor data for the purpose of flood management and emergency response. This includes experimenting with flood prediction models based on numerous meteorological satellites and a global hydrological model, and then automatically triggering follow-up high-resolution satellite imagery with rapid delivery of data products. This presentation will provide an overview of the effort, recent accomplishments, and future plans.

  14. Globally optimal trial design for local decision making.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2009-02-01

    Value of information methods allow decision makers to identify efficient trial designs following a principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is only expected value of sample information from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated in identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; (iii) avoiding market failure associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information with multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.

  15. Toward Global Harmonization of Derived Cloud Products

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Baum, Bryan A.; Choi, Yong-Sang; Foster, Michael J.; Karlsson, Karl-Goeran; Heidinger, Andrew; Poulsen, Caroline; Pavolonis, Michael; Riedi, Jerome; Roebeling, Robert

    2017-01-01

    Formerly known as the Cloud Retrieval Evaluation Workshop (CREW; see the list of acronyms used in this paper below) group (Roebeling et al. 2013, 2015), the International Cloud Working Group (ICWG) was created and endorsed during the 42nd Meeting of CGMS. The CGMS-ICWG provides a forum for space agencies to seek coherent progress in science and applications and also to act as a bridge between space agencies and the cloud remote sensing and applications community. The ICWG plans to serve as a forum to exchange and enhance knowledge on state-of-the-art cloud parameter retrieval algorithms, to stimulate support for training in the use of cloud parameters, and to encourage space agencies and the cloud remote sensing community to share knowledge. The ICWG plans to prepare recommendations to guide the direction of future research (for example, on observing severe weather events or on process studies) and to influence relevant programs of the WMO, WCRP, GCOS, and the space agencies.

  16. The Application of the SPASE Metadata Standard in the U.S. and Worldwide

    NASA Astrophysics Data System (ADS)

    Thieman, J. R.; King, T. A.; Roberts, D.

    2012-12-01

    The Space Physics Archive Search and Extract (SPASE) metadata standard for Heliophysics and related data is now an established standard within the NASA-funded space and solar physics community and is spreading to the international groups within that community. Development of SPASE has involved a number of international partners, and the current version of the SPASE Metadata Model (version 2.2.2) has not needed any structural modifications since January 2011. The SPASE standard has been adopted by groups such as NASA's Heliophysics Division, the Canadian Space Science Data Portal (CSSDP), Canada's AUTUMN network, Japan's Inter-university Upper atmosphere Global Observation NETwork (IUGONET), the Centre de Données de la Physique des Plasmas (CDPP), and the near-Earth space data infrastructure for e-Science (ESPAS). In addition, portions of the SPASE dictionary have been modeled in semantic web ontologies for use with reasoners and semantic searches. While we anticipate additional modifications to the model in the future to accommodate simulation and model data, these changes will not affect the data descriptions already generated for instrument-related datasets. Examples of SPASE descriptions can be viewed at http://www.spase-group.org/registry/explorer and data can be located using SPASE concepts by searching the Virtual Space Physics Observatory (http://vspo.gsfc.nasa.gov/websearch/dispatcher) for data of interest.

  17. The Future of Ground Magnetometer Arrays in Support of Space Weather Monitoring and Research

    NASA Astrophysics Data System (ADS)

    Engebretson, Mark; Zesta, Eftyhia

    2017-11-01

    A community workshop was held in Greenbelt, Maryland, on 5-6 May 2016 to discuss recommendations for the future of ground magnetometer array research in space physics. The community reviewed findings contained in the 2016 Geospace Portfolio Review of the Geospace Section of the Division of Atmospheric and Geospace Science of the National Science Foundation and discussed the present state of ground magnetometer arrays and possible pathways for a more optimal, robust, and effective organization and scientific use of these ground arrays. This paper summarizes the report of that workshop to the National Science Foundation (Engebretson & Zesta), as well as conclusions from two follow-up meetings. It describes the current state of U.S.-funded ground magnetometer arrays and summarizes community recommendations for changes in both organizational and funding structures. It also outlines a variety of new and/or augmented regional and global data products and visualizations that can be facilitated by increased collaboration among arrays. Such products will enhance the value of ground-based magnetometer data to the community's effort to understand Earth's space environment and space weather effects.

  18. A neural-network-based exponential H∞ synchronisation for chaotic secure communication via improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2016-10-01

    In this study, a novel approach via an improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to the GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. The IGA, which exhibits better performance than the traditional GA, is used to synthesise a fuzzy observer that not only realises the exponential synchronisation, but also achieves optimal H∞ performance by minimising the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.

  19. Experimental investigation on ignition schemes of partially covered cavities in a supersonic flow

    NASA Astrophysics Data System (ADS)

    Cai, Zun; Sun, Mingbo; Wang, Hongbo; Wang, Zhenguo

    2016-04-01

    In this study, ignition schemes of the partially covered cavity in a scramjet combustor were investigated under inflow conditions of Ma=2.1 with stagnation pressure P0=0.7 MPa and stagnation temperature T0=947 K. The results reveal that the ignition scheme of the partially covered cavity has a great impact on the ignition and flame stabilization process. There always exists an optimized global equivalence ratio for a fixed ignition scheme, and the optimized global equivalence ratio for ignition in the partially covered cavity is lower than that of the uncovered cavity. For tandem dual-cavities, ignition in the partially covered cavity could be enhanced by optimizing the global equivalence ratio. However, ignition in the partially covered cavity would be exacerbated when the global equivalence ratio was increased further. The global equivalence ratio and the jet penetration height are strongly coupled with the combustion flow-field. For multi-cavities, fuel injected on the opposite side could hardly be ignited after ignition in the partially covered cavity, even with the optimized global equivalence ratio. It is possible to realize ignition enhancement in the partially covered cavity by optimizing the global equivalence ratio, but this is not beneficial for thrust increment during the steady combustion process.

  20. Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group

    NASA Astrophysics Data System (ADS)

    Ardentov, Andrei A.; Sachkov, Yuri L.

    2017-12-01

    We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity; they too consist of conjugate points. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i.e., the set of minimizers for each terminal point in the Engel group.

  1. Aerostructural Shape and Topology Optimization of Aircraft Wings

    NASA Astrophysics Data System (ADS)

    James, Kai

    A series of novel algorithms for performing aerostructural shape and topology optimization are introduced and applied to the design of aircraft wings. An isoparametric level set method is developed for performing topology optimization of wings and other non-rectangular structures that must be modeled using a non-uniform, body-fitted mesh. The shape sensitivities are mapped to computational space using the transformation defined by the Jacobian of the isoparametric finite elements. The mapped sensitivities are then passed to the Hamilton-Jacobi equation, which is solved on a uniform Cartesian grid. The method is derived for several objective functions including mass, compliance, and global von Mises stress. The results are compared with SIMP results for several two-dimensional benchmark problems. The method is also demonstrated on a three-dimensional wingbox structure subject to fixed loading. It is shown that the isoparametric level set method is competitive with the SIMP method in terms of the final objective value as well as computation time. In a separate problem, the SIMP formulation is used to optimize the structural topology of a wingbox as part of a larger MDO framework. Here, topology optimization is combined with aerodynamic shape optimization, using a monolithic MDO architecture that includes aerostructural coupling. The aerodynamic loads are modeled using a three-dimensional panel method, and the structural analysis makes use of linear, isoparametric, hexahedral elements. The aerodynamic shape is parameterized via a set of twist variables representing the jig twist angle at equally spaced locations along the span of the wing. The sensitivities are determined analytically using a coupled adjoint method. The wing is optimized for minimum drag subject to a compliance constraint taken from a 2 g maneuver condition. 
The results from the MDO algorithm are compared with those of a sequential optimization procedure in order to quantify the benefits of the MDO approach. While the sequentially optimized wing exhibits a nearly elliptical lift distribution, the MDO design seeks to push a greater portion of the load toward the root, thus reducing the structural deflection and allowing for a lighter structure. By exploiting this trade-off, the MDO design achieves 42% lower drag than the sequential result.
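    The Hamilton-Jacobi update at the heart of the level set method above can be sketched with a first-order Godunov upwind step on a uniform Cartesian grid. The expanding-circle test case, grid size, and periodic `np.roll` boundaries are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def hj_step(phi, V, dx, dt):
    """One first-order Godunov upwind step of the level-set equation
    phi_t + V |grad phi| = 0 on a uniform Cartesian grid."""
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference in x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx
    grad_plus = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                        np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    grad_minus = np.sqrt(np.minimum(dmx, 0)**2 + np.maximum(dpx, 0)**2 +
                         np.minimum(dmy, 0)**2 + np.maximum(dpy, 0)**2)
    return phi - dt * (np.maximum(V, 0) * grad_plus +
                       np.minimum(V, 0) * grad_minus)

# Signed distance to a circle of radius 0.3; uniform speed V = 1 expands it
n = 64
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.3
phi1 = hj_step(phi0, V=np.ones_like(phi0), dx=dx, dt=0.4 * dx)
```

    In the aerostructural setting, V would carry the mapped shape sensitivities of the objective (mass, compliance, or stress) rather than a uniform speed.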

  2. The International Lunar Decade Declaration

    NASA Astrophysics Data System (ADS)

    Beldavs, V.; Foing, B.; Bland, D.; Crisafulli, J.

    2015-10-01

    The International Lunar Decade Declaration was discussed at the conference held November 9-13, 2014 in Hawaii, "The Next Giant Leap: Leveraging Lunar Assets for Sustainable Pathways to Space" - http://2014giantleap.aerospacehawaii.info/ - and accepted by a core group that forms the International Lunar Decade Working Group (ILDWG), which is seeking to make the proposed global event and decade-long process a reality. The Declaration will be updated from time to time by members of the ILDWG, reflecting new knowledge and fresh perspectives that bear on building a global consortium with a mission to progress from lunar exploration to the transformation of the Moon into a wealth-generating platform for the expansion of humankind into the solar system. When key organizations have endorsed the idea and joined the effort, the text of the Declaration will be considered final. An earlier International Lunar Decade proposal was issued at the 8th ICEUM Conference in 2006 in Beijing, together with 13 specific initiatives for lunar exploration [1,2,3]. These initiatives have been largely implemented, with coordination among the different space agencies involved provided by the International Lunar Exploration Working Group [2,3]. The Second International Lunar Decade, from 2015, reflects current trends towards increasing involvement of commercial firms in space, particularly seeking opportunities beyond low Earth orbit.
    The central vision of the International Lunar Decade is to build the foundations for a sustainable space economy through international collaboration, concurrently addressing: lunar exploration and building a shared knowledge base; policy development that enables collaborative research and development leading to lunar mining and industrial and commercial development; infrastructure on the Moon and in cislunar space (communications, transport, energy systems, way-stations, and other) that reduces costs, lowers risks, and speeds up the time to profitable operations; and enabling technologies needed for lunar operations (robotic and human), lunar mining, materials processing, manufacturing, transportation, life support, and other needs.

  3. Advanced Ionospheric Sensing using GROUP-C and LITES aboard the ISS

    NASA Astrophysics Data System (ADS)

    Budzien, S. A.; Stephan, A. W.; Chakrabarti, S.; Finn, S. C.; Cook, T.; Powell, S. P.; O'Hanlon, B.; Bishop, R. L.

    2015-12-01

    The GPS Radio Occultation and Ultraviolet Photometer Co-located (GROUP-C) and Limb-imaging Ionospheric and Thermospheric Extreme-ultraviolet Spectrograph (LITES) experiments are manifested for flight aboard the International Space Station (ISS) in 2016 as part of the Space Test Program Houston #5 payload. The two experiments provide technical development and risk-reduction for future DoD space weather sensors suitable for ionospheric specification, space situational awareness, and data products for global ionosphere assimilative models. In addition, the combined instrument complement of these two experiments offers a unique opportunity to study structures of the nighttime ionosphere. GROUP-C includes an advanced GPS receiver providing ionospheric electron density profiles and scintillation measurements and a high-sensitivity far-ultraviolet photometer measuring horizontal ionospheric gradients. LITES is an imaging spectrograph that spans 60-140 nm and will obtain high-cadence limb profiles of the ionosphere and thermosphere from 150-350 km altitude. In the nighttime ionosphere, recombination of O+ and electrons produces optically thin emissions at 91.1 and 135.6 nm that can be used to tomographically reconstruct the two-dimensional plasma distribution in the orbital plane below ISS altitudes. Ionospheric irregularities, such as plasma bubbles and blobs, are transient features of the low- and middle-latitude ionosphere with important implications for operational systems. Irregularity structures have been studied primarily using ground-based systems, though some space-based remote and in-situ sensing has been performed. An ionospheric observatory aboard the ISS would provide new capability to study low- and mid-latitude ionospheric structures on a global scale.
By combining for the first time high-sensitivity in-track photometry, vertical ionospheric airglow spectrographic imagery, and recent advancements in UV tomography, high-fidelity tomographic reconstruction of nighttime structures can be performed from the ISS. We discuss the tomographic approach, simulated reconstructions, and value added by including complementary ground-based observations. Acknowledgements: This work is supported by NRL Work Unit 76-1C09-05.
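    The tomographic step described above can be illustrated with a toy algebraic reconstruction technique (ART), which solves the optically thin line-integral system row by row. The onion-peeling chord geometry and emission values below are hypothetical, not the actual GROUP-C/LITES retrieval.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=1.0):
    """Kaczmarz/ART: cycle through lines of sight, projecting the current
    estimate onto each measurement hyperplane. A[i] holds the path lengths
    of chord i through the cells; b[i] is its optically thin
    line-integrated brightness."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return np.clip(x, 0.0, None)   # volume emission rates are nonnegative

# Toy onion-peeling limb geometry: 4 shells, 4 chords with descending
# tangent altitudes (values are illustrative only)
A = np.array([[1., 1., 1., 1.],    # deepest chord crosses every shell
              [0., 1., 1., 1.],
              [0., 0., 1., 1.],
              [0., 0., 0., 1.]])   # highest chord grazes only the top shell
truth = np.array([0.2, 1.0, 0.7, 0.1])
b = A @ truth                      # synthetic noise-free measurements
recon = art_reconstruct(A, b)
```

    With noise-free, consistent data, ART converges to the unique solution; with real airglow data one would stop early or regularize, and combine the in-track photometry and limb imagery into a single two-dimensional system.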

  4. Pediatric resuscitation training-instruction all at once or spaced over time?

    PubMed

    Patocka, Catherine; Khan, Farooq; Dubrovsky, Alexander Sasha; Brody, Danny; Bank, Ilana; Bhanji, Farhan

    2015-03-01

    Healthcare providers demonstrate limited retention of knowledge and skills in the months following completion of a resuscitation course. Resuscitation courses are typically taught in a massed format (over 1-2 days); however, studies in education psychology have suggested that spacing training may result in improved learning and retention. Our study explored the impact of spaced instruction compared to traditional massed instruction on learner knowledge and pediatric resuscitation skills. Medical students completed a pediatric resuscitation course in either a spaced or massed format. Four weeks following course completion, students completed a knowledge exam, and blinded observers used expert-developed checklists to assess student performance of three skills: bag-valve-mask ventilation (BVMV), intra-osseous insertion (IOI), and chest compressions (CC). Forty-five out of 48 students completed the study protocol. Students in both groups had similar scores on the knowledge exam, spaced (37.8±6.1) vs. massed (34.3±7.6) (p<0.09), and similar overall global rating scale scores for IOI, BVMV and CC; however, students in the spaced group performed critical procedural elements more frequently than those in the massed training group. Learner knowledge and performance of procedural skills in pediatric resuscitation taught in a spaced format is at least as good as learning in a massed format. Procedures learned in a spaced format may result in better retention of skills when compared to massed training. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Space Weather Workshop 2010 to Be Held in April

    NASA Astrophysics Data System (ADS)

    Peltzer, Thomas

    2010-03-01

    The annual Space Weather Workshop will be held in Boulder, Colo., 27-30 April 2010. The workshop will bring customers, forecasters, commercial service providers, researchers, and government agencies together in a lively dialogue about space weather. The workshop will include 4 days of plenary sessions on a variety of topics, with poster sessions focusing on the Sun, interplanetary space, the magnetosphere, and the ionosphere. The conference will address the remarkably diverse impacts of space weather on today's technology. Highlights on this year's agenda include ionospheric storms and their impacts on the Global Navigation Satellite System (GNSS), an update on NASA's recently launched Solar Dynamics Observatory (SDO), and new space weather-related activities in the Federal Emergency Management Agency (FEMA). Also this year, the Commercial Space Weather Interest Group will feature a presentation by former NOAA administrator, Vice Admiral Conrad Lautenbacher, U.S. Navy (Ret.).

  6. KSC-2011-3138

    NASA Image and Video Library

    2011-04-27

    CAPE CANAVERAL, Fla. -- In the Press Site bull pen at NASA's Kennedy Space Center in Florida, NASA's Associate Administrator for Education Leland Melvin talks about the LEGO sets going up to the International Space Station (ISS) aboard space shuttle Endeavour's STS-134 mission. NASA and The LEGO Group will send 23 LEGO sets to the station and some of those sets include a space shuttle, an ISS model, a Global Positioning Satellite and NASA's Hubble Space Telescope. The sets will be used for NASA's Teaching From Space Project, which is part of a three-year Space Act Agreement with the toy maker to spark the interest of children in science, technology, engineering and mathematics (STEM). Liftoff is scheduled for April 29 at 3:47 p.m. EDT. This will be the final spaceflight for Endeavour. For more information visit, www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts134/index.html. Photo credit: NASA/Frankie Martin

  7. A Mission Concept Based on the ISECG Human Lunar Surface Architecture

    NASA Technical Reports Server (NTRS)

    Gruener, J. E.; Lawrence, S. J.

    2017-01-01

    The National Aeronautics and Space Administration (NASA) is participating in the International Space Exploration Coordination Group (ISECG), working together with 13 other space agencies to advance a long-range human space exploration strategy. The ISECG has developed a Global Exploration Roadmap (GER) that reflects the coordinated international dialog and continued preparation for exploration beyond low-Earth orbit - beginning with the International Space Station (ISS) and continuing to the Moon, near-Earth asteroids, and Mars [1]. The roadmap demonstrates how initial capabilities can enable a variety of missions in the lunar vicinity, responding to individual and common goals and objectives, while contributing to building partnerships required for sustainable human space exploration that delivers value to the public. The current GER includes three different near-term themes: exploration of a near-Earth asteroid, extended duration crew missions in cis-lunar space, and humans to the lunar surface.

  8. Global Trends in Space Access and Utilization

    NASA Technical Reports Server (NTRS)

    Rahman, Shamim A.; Keim, Nicholas S.; Zeender, Peter E.

    2010-01-01

    In the not-so-distant past, space access and air/space technology superiority were within the purview of the U.S. and former Soviet Union's respective space agencies, both vying for global leadership in space exploitation. In more recent years, with the emergence of the European Space Agency (ESA) member countries and Asian countries joining the family of space-faring nations, it is now truer than ever that space access and utilization have become a truly global enterprise. In fact, according to the Space Report 2007, this enterprise is a $251-billion economy. It is possible to gauge the vitality of worldwide efforts from open sources in today's transparent, media-based society. In particular, print and web broadcasters regularly report and catalog global space activities for defense and civil purposes. For the purposes of this paper, a representative catalog of missions is used to illustrate the nature of the emerging "globalization." This paper highlights global trends in terms of not only the providers of space access, but also the end-users for the various recently accomplished missions. With well over 50 launches per year in recent years, the launch log reveals a surprising percentage of "cooperative or co-dependent missions" in which different agencies, countries, and/or commercial entities are engaged, presumably to the benefit of all who participate. Statistics are cited to show that recently over 40% of the 50-plus missions involved multiple nations working collectively to deliver payloads to orbit. Observers, space policy professionals, and space agency leaders have eloquently proposed that it might require the combined resources and talents of multiple nations to advance human exploration goals beyond low Earth orbit.
This paper does not intend to offer new information with respect to whether international collaboration is necessary, but observes that, in continuing to monitor global trends, the results seem to support the thesis that a global interdependent effort, with all its likely complexities, is an increasingly viable and pragmatic option. The discussion includes a breakdown of space missions into those of civil (scientific), military, and strictly commercial nature. It concludes that all three are robust components of a globally diversified portfolio of activities relying, essentially, on a common space industrial base and space infrastructure. As in other industries, the distribution of space industry assets and knowledge across countries and continents enables a diverse suite of options and arrangements, particularly in the areas of civil and commercial space utilization. A survey of several ongoing bilateral and multilateral space collaboration examples is provided to augment the observations regarding multinational work in space.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.

    We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints as bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations to provide, for the first time, a global optimization based rank-list of distillation configurations.
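    The Underwood equations at the core of such models can be illustrated by locating the roots of Underwood's feed equation, which interlace the relative volatilities; because the equation is strictly increasing between adjacent poles, bisection finds each root reliably. The ternary data below are illustrative only, not the paper's bilinear formulation.

```python
def underwood_roots(alpha, z, q, tol=1e-10):
    """Find the roots theta of Underwood's feed equation
        sum_i alpha_i * z_i / (alpha_i - theta) = 1 - q,
    one root lying strictly between each adjacent pair of relative
    volatilities (alpha sorted descending), via bisection."""
    f = lambda th: sum(a * x / (a - th) for a, x in zip(alpha, z)) - (1.0 - q)
    roots = []
    for hi, lo in zip(alpha, alpha[1:]):      # one root in each (lo, hi)
        a, b = lo + 1e-9, hi - 1e-9           # stay off the poles
        fa = f(a)                             # f is negative near lo, positive near hi
        while b - a > tol:
            m = 0.5 * (a + b)
            if (f(m) > 0) == (fa > 0):
                a, fa = m, f(m)
            else:
                b = m
        roots.append(0.5 * (a + b))
    return roots

# Illustrative ternary mixture with a saturated liquid feed (q = 1)
alpha = [4.0, 2.0, 1.0]       # relative volatilities, descending
z = [0.3, 0.4, 0.3]           # feed mole fractions
thetas = underwood_roots(alpha, z, q=1.0)
```

    Each active root then yields one minimum-vapor-duty constraint; the paper's contribution is turning those constraints into bilinear inequalities amenable to guaranteed global optimization.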

  10. A scientific assessment of a new technology orbital telescope

    NASA Technical Reports Server (NTRS)

    1995-01-01

    As part of a program designed to test the Alpha chemical laser weapons system in space, the Ballistic Missile Defense Organization (BMDO) developed components of an agile, lightweight, 4-meter telescope, equipped with an advanced active-optics system. BMDO had proposed to make space available in the telescope's focal plane for instrumentation optimized for scientific applications in astrophysics and planetary astronomy for a potential flight mission. Such a flight mission could be undertaken if new or additional sponsorship can be found. Despite this uncertainty, BMDO requested assistance in defining the instrumentation and other design aspects necessary to enhance the scientific value of a pointing and tracking mission. In response to this request, the Space Studies Board established the Task Group on BMDO New Technology Orbital Observatory (TGBNTOO) and charged it to: (1) provide instrumentation, data management, and science-operations advice to BMDO to optimize the scientific value of a 4-meter mission; and (2) support a Space Studies Board assessment of the relative scientific merit of the program. This report deals with the first of these tasks, assessing the scientific potential of the Advanced Technology Demonstrator (ATD) program. Given the potential scientific aspects of the 4-meter telescope, this project is referred to as the New Technology Orbital Telescope (NTOT), or as the ATD/NTOT, to emphasize its dual-use character. The task group's basic conclusion is that the ATD/NTOT mission does have the potential for contributing in a major way to astronomical goals.

  11. Groupwork with Learning Disabilities: Creative Drama. A Winslow Practical Manual.

    ERIC Educational Resources Information Center

    Chesner, Anna

    This British book discusses the value of creative drama for people with learning disabilities, offers some basic principles of working with people with learning disabilities, and describes a variety of approaches to drama. An introduction discusses the optimal size of a creative drama group, the kind of work space needed, equipment, membership,…

  12. External tank program productivity and quality enhancements - A case study

    NASA Technical Reports Server (NTRS)

    Locke, S. R.

    1988-01-01

    The optimization of the Manned Space Systems productivity and quality enhancement program is described. The guidelines and standards of the program and the roles played in it by various employee groups are addressed. Aspects of the program involving job/organization redefinition, production and planning in automation, and artificial intelligence and robotics are examined.

  13. Micro-Arcsec mission: implications of the monitoring, diagnostic and calibration of the instrument response in the data reduction chain

    NASA Astrophysics Data System (ADS)

    Busonero, D.; Gai, M.

    The goals of 21st-century high angular precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow-angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and, in general, high accuracy. We refer to the frameworks of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow-angle and global astrometry fields, respectively. We focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, evidencing their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.

  14. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
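    The two-stage strategy (stochastic optimization to find low-misfit regions, then random sampling of equivalent models) can be sketched as follows. A simplified (mu, lambda) evolution strategy stands in for full CMAES, and the flat-valley misfit is a toy surrogate for the PDE-constrained problem; only the sum of the two parameters is constrained, mimicking an equivalence domain.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(m):
    """Toy misfit with a flat valley: only m[0] + m[1] is constrained
    by the 'data', so many models fit equally well."""
    return (m[0] + m[1] - 1.0) ** 2

def es_minimize(f, x0, sigma=0.5, lam=20, mu=5, iters=60):
    """Simplified (mu, lambda) evolution strategy; full CMAES would also
    adapt the full covariance matrix, not just a scalar step size."""
    mean = np.asarray(x0, float)
    for _ in range(iters):
        pop = mean + sigma * rng.standard_normal((lam, mean.size))
        pop = pop[np.argsort([f(p) for p in pop])]
        mean = pop[:mu].mean(axis=0)      # recombine the mu best samples
        sigma *= 0.95                     # crude step-size decay
    return mean

best = es_minimize(misfit, x0=[3.0, 3.0])

# Randomly sample around the optimum; keep only models whose misfit stays
# below a threshold -- the retained set samples the equivalence domain
candidates = best + 0.5 * rng.standard_normal((2000, 2))
ensemble = candidates[[misfit(c) < 0.01 for c in candidates]]
```

    In the paper's setting, the covariance information CMAES accumulates is also used to propose better Metropolis-Hastings moves, which is why the MCMC stage speeds up.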

  15. Parameter Trade Studies For Coherent Lidar Wind Measurements of Wind from Space

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J.; Frehlich, Rod G.

    2007-01-01

    The design of an orbiting wind profiling lidar requires selection of dozens of lidar, measurement scenario, and mission geometry parameters; in addition to prediction of atmospheric parameters. Typical mission designs do not include a thorough trade optimization of all of these parameters. We report here the integration of a recently published parameterization of coherent lidar wind velocity measurement performance with an orbiting coherent wind lidar computer simulation; and the use of these combined tools to perform some preliminary parameter trades. We use the 2006 NASA Global Wind Observing Sounder mission design as the starting point for the trades.

  16. On the extensible viscoelastic beam

    NASA Astrophysics Data System (ADS)

    Giorgi, Claudio; Pata, Vittorino; Vuk, Elena

    2008-04-01

    This work is focused on the equation \[ \partial_{tt} u + \partial_{xxxx} u + \int_0^\infty \mu(s)\, \partial_{xxxx}[u(t)-u(t-s)]\,ds - \big(\beta + \|\partial_x u\|_{L^2(0,1)}^2\big)\partial_{xx} u = f \] describing the motion of an extensible viscoelastic beam. Under suitable boundary conditions, the related dynamical system in the history space framework is shown to possess a global attractor of optimal regularity. The result is obtained by exploiting an appropriate decomposition of the solution semigroup, together with the existence of a Lyapunov functional.
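    For context, the natural energy associated with an equation of this form in the history-space (Dafermos) framework is the standard extensible-beam functional (the paper's actual Lyapunov functional may differ in lower-order terms) \[ E(t) = \tfrac{1}{2}\|\partial_t u\|_{L^2}^2 + \tfrac{1}{2}\|\partial_{xx} u\|_{L^2}^2 + \tfrac{1}{2}\int_0^\infty \mu(s)\,\|\partial_{xx}\eta^t(s)\|_{L^2}^2\,ds + \tfrac{\beta}{2}\|\partial_x u\|_{L^2}^2 + \tfrac{1}{4}\|\partial_x u\|_{L^2}^4, \] where \eta^t(s) = u(t) - u(t-s) is the relative displacement history. For f = 0 and nonincreasing \mu, formal differentiation gives dE/dt \le 0, which is the dissipation mechanism exploited through the semigroup decomposition to obtain the attractor.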

  17. Accelerated search for materials with targeted properties by adaptive design

    PubMed Central

    Xue, Dezhen; Balachandran, Prasanna V.; Hogden, John; Theiler, James; Xue, Deqing; Lookman, Turab

    2016-01-01

    Finding new materials with targeted properties has traditionally been guided by intuition, and trial and error. With increasing chemical complexity, the combinatorial possibilities are too large for an Edisonian approach to be practical. Here we show how an adaptive design strategy, tightly coupled with experiments, can accelerate the discovery process by sequentially identifying the next experiments or calculations, to effectively navigate the complex search space. Our strategy uses inference and global optimization to balance the trade-off between exploitation and exploration of the search space. We demonstrate this by finding very low thermal hysteresis (ΔT) NiTi-based shape memory alloys, with Ti50.0Ni46.7Cu0.8Fe2.3Pd0.2 possessing the smallest ΔT (1.84 K). We synthesize and characterize 36 predicted compositions (9 feedback loops) from a potential space of ∼800,000 compositions. Of these, 14 had smaller ΔT than any of the 22 in the original data set. PMID:27079901
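    The exploitation/exploration balance described above is commonly handled with an expected-improvement acquisition over a surrogate model's predictions. The sketch below uses made-up surrogate means and uncertainties for hypothetical alloy compositions, not the study's actual design loop.

```python
import math

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected amount by which a candidate with
    predicted mean mu and uncertainty sigma beats the current best value.
    Low mu rewards exploitation; high sigma rewards exploration."""
    if sigma == 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

# Surrogate predictions (mean, uncertainty) for three hypothetical
# compositions; current best measured hysteresis is 2.5 (arbitrary units)
candidates = [(2.4, 0.1), (3.0, 1.5), (2.6, 0.0)]
scores = [expected_improvement(m, s, best=2.5) for m, s in candidates]
next_experiment = max(range(len(candidates)), key=scores.__getitem__)
```

    Note that the highly uncertain candidate wins even though its predicted mean is worse: the acquisition deliberately spends some experiments on exploration, which is how the feedback loops narrow an ~800,000-composition space in a few dozen syntheses.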

  18. Towards causal patch physics in dS/CFT

    NASA Astrophysics Data System (ADS)

    Neiman, Yasha

    2018-01-01

    This contribution is a status report on a research program aimed at obtaining quantum-gravitational physics inside a cosmological horizon through dS/CFT, i.e. through a holographic description at past/future infinity of de Sitter space. The program aims to bring together two main elements. The first is the observation by Anninos, Hartman and Strominger that Vasiliev's higher-spin gravity provides a working model for dS/CFT in 3+1 dimensions. The second is the proposal by Parikh, Savonije and Verlinde that dS/CFT may prove more tractable if one works in so-called "elliptic" de Sitter space - a folded-in-half version of global de Sitter where antipodal points have been identified. We review some relevant progress concerning quantum field theory on elliptic de Sitter space, higher-spin gravity and its holographic duality with a free vector model. We present our reasons for optimism that the approach outlined here will lead to a full holographic description of quantum (higher-spin) gravity in the causal patch of a de Sitter observer.

  19. Geometrical-Based Navigation System Performance Assessment in the Space Service Volume Using a Multiglobal Navigation Satellite System Methodology

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user in the Space Service Volume (SSV) when a multi-GNSS solution space approach is utilized. The ICG Working Group on Enhancement of GNSS Performance, New Services and Capabilities has started a three-phase analysis initiative, increasing in complexity and fidelity across the phases, as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that initiative is based on a purely geometrically derived access technique. The first phase of analysis has been completed, and the results are documented in this paper.
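    A purely geometric access test of this kind reduces to two checks per user-satellite pair: is the line of sight blocked by the Earth, and does the user fall inside the transmit antenna cone. The sketch below is a generic illustration with assumed radii and beam half-angle, not the ICG methodology or any constellation's actual antenna pattern.

```python
import math

def los_clear(user, sat, r_earth=6378.0):
    """True if the user-to-satellite line of sight clears the Earth
    sphere; positions in km in an Earth-centered frame."""
    ux, uy, uz = user
    dx, dy, dz = sat[0] - ux, sat[1] - uy, sat[2] - uz
    seg2 = dx * dx + dy * dy + dz * dz
    # Parameter of the point on the segment closest to Earth's center
    t = max(0.0, min(1.0, -(ux * dx + uy * dy + uz * dz) / seg2))
    cx, cy, cz = ux + t * dx, uy + t * dy, uz + t * dz
    return math.sqrt(cx * cx + cy * cy + cz * cz) > r_earth

def within_beam(sat, user, half_angle_deg):
    """True if the user lies inside a nadir-pointed transmit cone."""
    bx, by, bz = -sat[0], -sat[1], -sat[2]   # boresight: toward Earth's center
    vx, vy, vz = user[0] - sat[0], user[1] - sat[1], user[2] - sat[2]
    nb = math.sqrt(bx * bx + by * by + bz * bz)
    nv = math.sqrt(vx * vx + vy * vy + vz * vz)
    cosang = (bx * vx + by * vy + bz * vz) / (nb * nv)
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang)))) <= half_angle_deg

# Hypothetical geometry (km): user near GEO radius, GNSS-like satellite on
# the far side of Earth -- the signal would pass through the Earth
user = (42164.0, 0.0, 0.0)
sat = (-26560.0, 0.0, 0.0)
visible = los_clear(user, sat) and within_beam(sat, user, half_angle_deg=23.0)
```

    This far-side geometry is exactly why SSV users above the constellations depend on main-lobe spillover past the Earth's limb, which is what makes multi-GNSS aggregation so valuable: any one constellation provides only intermittent access.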

  20. Teledesic Global Wireless Broadband Network: Space Infrastructure Architecture, Design Features and Technologies

    NASA Technical Reports Server (NTRS)

    Stuart, James R.

    1995-01-01

    The Teledesic satellites are a new class of small satellites which demonstrate the important commercial benefits of using technologies developed for other purposes by U.S. National Laboratories. The Teledesic satellite architecture, subsystem design features, and new technologies are described. The new Teledesic satellite manufacturing, integration, and test approaches which use modern high volume production techniques and result in surprisingly low space segment costs are discussed. The constellation control and management features and attendant software architecture features are addressed. After briefly discussing the economic and technological impact on the USA commercial space industries of the space communications revolution and such large constellation projects, the paper concludes with observations on the trend toward future system architectures using networked groups of much smaller satellites.
